Test Report: KVM_Linux_crio 17734

1d1c6f3c143e2d28fe63167ba90e3265538c6a3a:2023-12-12:32255
Failed tests (26/305)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 163.12
48 TestAddons/StoppedEnableDisable 155.15
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.43
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 174.14
212 TestMultiNode/serial/PingHostFrom2Pods 3.33
219 TestMultiNode/serial/RestartKeepsNodes 686.03
221 TestMultiNode/serial/StopMultiNode 143.33
228 TestPreload 186.65
234 TestRunningBinaryUpgrade 167.13
243 TestStoppedBinaryUpgrade/Upgrade 290.14
333 TestStartStop/group/no-preload/serial/Stop 140.43
336 TestStartStop/group/embed-certs/serial/Stop 140.06
339 TestStartStop/group/old-k8s-version/serial/Stop 139.72
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.25
343 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
346 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.39
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
351 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.35
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.28
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.2
354 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.3
355 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 459.34
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 448.64
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 300.24
358 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 229.43
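
In the TestAddons/parallel/Ingress log below, the failing step is the in-VM curl check: the ssh'd curl against http://127.0.0.1/ with Host: nginx.example.com exits non-zero after roughly 2m11s, with ssh reporting status 28, which is curl's operation-timeout exit code, so the request most likely never got a response from the ingress controller. A minimal sketch for re-running that check by hand, assuming the same profile name (addons-459174) and the ingress-nginx namespace that appear in the logs; the commands mirror the ones recorded there:

	kubectl --context addons-459174 -n ingress-nginx get pods -o wide
	out/minikube-linux-amd64 -p addons-459174 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"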
TestAddons/parallel/Ingress (163.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-459174 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-459174 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-459174 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d66934a8-9889-4d2f-86bc-fef56154d835] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d66934a8-9889-4d2f-86bc-fef56154d835] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.020687364s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459174 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.882917667s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-459174 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.145
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-459174 addons disable ingress-dns --alsologtostderr -v=1: (1.385134191s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-459174 addons disable ingress --alsologtostderr -v=1: (7.883212091s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-459174 -n addons-459174
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-459174 logs -n 25: (1.338473915s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |                     |
	|         | -p download-only-931277                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC | 12 Dec 23 19:57 UTC |
	| delete  | -p download-only-931277                                                                     | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC | 12 Dec 23 19:57 UTC |
	| delete  | -p download-only-931277                                                                     | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC | 12 Dec 23 19:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-839741 | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC |                     |
	|         | binary-mirror-839741                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36279                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-839741                                                                     | binary-mirror-839741 | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC | 12 Dec 23 19:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC |                     |
	|         | addons-459174                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC |                     |
	|         | addons-459174                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-459174 --wait=true                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:57 UTC | 12 Dec 23 19:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	|         | addons-459174                                                                               |                      |         |         |                     |                     |
	| addons  | addons-459174 addons                                                                        | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	|         | -p addons-459174                                                                            |                      |         |         |                     |                     |
	| ip      | addons-459174 ip                                                                            | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	| addons  | addons-459174 addons disable                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	|         | -p addons-459174                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 19:59 UTC | 12 Dec 23 19:59 UTC |
	|         | addons-459174                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-459174 ssh cat                                                                       | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:00 UTC | 12 Dec 23 20:00 UTC |
	|         | /opt/local-path-provisioner/pvc-b127a7ff-99c7-4435-af5f-944d91801ed2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-459174 addons disable                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:00 UTC | 12 Dec 23 20:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-459174 ssh curl -s                                                                   | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-459174 addons disable                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:00 UTC | 12 Dec 23 20:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-459174 addons                                                                        | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:01 UTC | 12 Dec 23 20:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-459174 addons                                                                        | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:01 UTC | 12 Dec 23 20:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-459174 ip                                                                            | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:02 UTC | 12 Dec 23 20:02 UTC |
	| addons  | addons-459174 addons disable                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:02 UTC | 12 Dec 23 20:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-459174 addons disable                                                                | addons-459174        | jenkins | v1.32.0 | 12 Dec 23 20:02 UTC | 12 Dec 23 20:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 19:57:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:57:11.222894   16875 out.go:296] Setting OutFile to fd 1 ...
	I1212 19:57:11.223019   16875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:11.223032   16875 out.go:309] Setting ErrFile to fd 2...
	I1212 19:57:11.223037   16875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:11.223303   16875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 19:57:11.223967   16875 out.go:303] Setting JSON to false
	I1212 19:57:11.224811   16875 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2385,"bootTime":1702408646,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:57:11.224874   16875 start.go:138] virtualization: kvm guest
	I1212 19:57:11.227124   16875 out.go:177] * [addons-459174] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 19:57:11.228613   16875 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 19:57:11.228609   16875 notify.go:220] Checking for updates...
	I1212 19:57:11.229907   16875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:57:11.231230   16875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 19:57:11.232487   16875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 19:57:11.233708   16875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:57:11.234832   16875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:57:11.236107   16875 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 19:57:11.269362   16875 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 19:57:11.270862   16875 start.go:298] selected driver: kvm2
	I1212 19:57:11.270875   16875 start.go:902] validating driver "kvm2" against <nil>
	I1212 19:57:11.270885   16875 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:57:11.271632   16875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:11.271711   16875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 19:57:11.286070   16875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 19:57:11.286124   16875 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 19:57:11.286333   16875 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:57:11.286398   16875 cni.go:84] Creating CNI manager for ""
	I1212 19:57:11.286410   16875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:57:11.286418   16875 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:57:11.286428   16875 start_flags.go:323] config:
	{Name:addons-459174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-459174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:57:11.286550   16875 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:57:11.288406   16875 out.go:177] * Starting control plane node addons-459174 in cluster addons-459174
	I1212 19:57:11.289887   16875 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 19:57:11.289924   16875 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 19:57:11.289935   16875 cache.go:56] Caching tarball of preloaded images
	I1212 19:57:11.290017   16875 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 19:57:11.290030   16875 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 19:57:11.290362   16875 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/config.json ...
	I1212 19:57:11.290389   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/config.json: {Name:mkd0b6bfff6c6c4f8db54c6f914bde69a440171b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:11.290539   16875 start.go:365] acquiring machines lock for addons-459174: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 19:57:11.290628   16875 start.go:369] acquired machines lock for "addons-459174" in 43.912µs
	I1212 19:57:11.290655   16875 start.go:93] Provisioning new machine with config: &{Name:addons-459174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-459174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:57:11.290737   16875 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 19:57:11.292961   16875 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1212 19:57:11.293109   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:57:11.293153   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:57:11.307127   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I1212 19:57:11.307514   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:57:11.308023   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:57:11.308053   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:57:11.308549   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:57:11.309756   16875 main.go:141] libmachine: (addons-459174) Calling .GetMachineName
	I1212 19:57:11.309933   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:11.310107   16875 start.go:159] libmachine.API.Create for "addons-459174" (driver="kvm2")
	I1212 19:57:11.310141   16875 client.go:168] LocalClient.Create starting
	I1212 19:57:11.310209   16875 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem
	I1212 19:57:11.358550   16875 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem
	I1212 19:57:11.491640   16875 main.go:141] libmachine: Running pre-create checks...
	I1212 19:57:11.491663   16875 main.go:141] libmachine: (addons-459174) Calling .PreCreateCheck
	I1212 19:57:11.492190   16875 main.go:141] libmachine: (addons-459174) Calling .GetConfigRaw
	I1212 19:57:11.492641   16875 main.go:141] libmachine: Creating machine...
	I1212 19:57:11.492657   16875 main.go:141] libmachine: (addons-459174) Calling .Create
	I1212 19:57:11.492820   16875 main.go:141] libmachine: (addons-459174) Creating KVM machine...
	I1212 19:57:11.494047   16875 main.go:141] libmachine: (addons-459174) DBG | found existing default KVM network
	I1212 19:57:11.494771   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:11.494634   16896 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000149900}
	I1212 19:57:11.500063   16875 main.go:141] libmachine: (addons-459174) DBG | trying to create private KVM network mk-addons-459174 192.168.39.0/24...
	I1212 19:57:11.568038   16875 main.go:141] libmachine: (addons-459174) DBG | private KVM network mk-addons-459174 192.168.39.0/24 created
	I1212 19:57:11.568074   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:11.568020   16896 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 19:57:11.568101   16875 main.go:141] libmachine: (addons-459174) Setting up store path in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174 ...
	I1212 19:57:11.568124   16875 main.go:141] libmachine: (addons-459174) Building disk image from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 19:57:11.568146   16875 main.go:141] libmachine: (addons-459174) Downloading /home/jenkins/minikube-integration/17734-9188/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 19:57:11.776869   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:11.776716   16896 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa...
	I1212 19:57:11.854338   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:11.854162   16896 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/addons-459174.rawdisk...
	I1212 19:57:11.854385   16875 main.go:141] libmachine: (addons-459174) DBG | Writing magic tar header
	I1212 19:57:11.854404   16875 main.go:141] libmachine: (addons-459174) DBG | Writing SSH key tar header
	I1212 19:57:11.854418   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:11.854355   16896 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174 ...
	I1212 19:57:11.854587   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174
	I1212 19:57:11.854636   16875 main.go:141] libmachine: (addons-459174) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174 (perms=drwx------)
	I1212 19:57:11.854657   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines
	I1212 19:57:11.854681   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 19:57:11.854697   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188
	I1212 19:57:11.854712   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 19:57:11.854728   16875 main.go:141] libmachine: (addons-459174) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines (perms=drwxr-xr-x)
	I1212 19:57:11.854740   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home/jenkins
	I1212 19:57:11.854755   16875 main.go:141] libmachine: (addons-459174) DBG | Checking permissions on dir: /home
	I1212 19:57:11.854767   16875 main.go:141] libmachine: (addons-459174) DBG | Skipping /home - not owner
	I1212 19:57:11.854783   16875 main.go:141] libmachine: (addons-459174) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube (perms=drwxr-xr-x)
	I1212 19:57:11.854804   16875 main.go:141] libmachine: (addons-459174) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188 (perms=drwxrwxr-x)
	I1212 19:57:11.854815   16875 main.go:141] libmachine: (addons-459174) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 19:57:11.854831   16875 main.go:141] libmachine: (addons-459174) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 19:57:11.854843   16875 main.go:141] libmachine: (addons-459174) Creating domain...
	I1212 19:57:11.855764   16875 main.go:141] libmachine: (addons-459174) define libvirt domain using xml: 
	I1212 19:57:11.855787   16875 main.go:141] libmachine: (addons-459174) <domain type='kvm'>
	I1212 19:57:11.855798   16875 main.go:141] libmachine: (addons-459174)   <name>addons-459174</name>
	I1212 19:57:11.855807   16875 main.go:141] libmachine: (addons-459174)   <memory unit='MiB'>4000</memory>
	I1212 19:57:11.855822   16875 main.go:141] libmachine: (addons-459174)   <vcpu>2</vcpu>
	I1212 19:57:11.855835   16875 main.go:141] libmachine: (addons-459174)   <features>
	I1212 19:57:11.855845   16875 main.go:141] libmachine: (addons-459174)     <acpi/>
	I1212 19:57:11.855858   16875 main.go:141] libmachine: (addons-459174)     <apic/>
	I1212 19:57:11.855872   16875 main.go:141] libmachine: (addons-459174)     <pae/>
	I1212 19:57:11.855888   16875 main.go:141] libmachine: (addons-459174)     
	I1212 19:57:11.855902   16875 main.go:141] libmachine: (addons-459174)   </features>
	I1212 19:57:11.855915   16875 main.go:141] libmachine: (addons-459174)   <cpu mode='host-passthrough'>
	I1212 19:57:11.855927   16875 main.go:141] libmachine: (addons-459174)   
	I1212 19:57:11.855936   16875 main.go:141] libmachine: (addons-459174)   </cpu>
	I1212 19:57:11.855949   16875 main.go:141] libmachine: (addons-459174)   <os>
	I1212 19:57:11.855962   16875 main.go:141] libmachine: (addons-459174)     <type>hvm</type>
	I1212 19:57:11.855981   16875 main.go:141] libmachine: (addons-459174)     <boot dev='cdrom'/>
	I1212 19:57:11.855995   16875 main.go:141] libmachine: (addons-459174)     <boot dev='hd'/>
	I1212 19:57:11.856008   16875 main.go:141] libmachine: (addons-459174)     <bootmenu enable='no'/>
	I1212 19:57:11.856025   16875 main.go:141] libmachine: (addons-459174)   </os>
	I1212 19:57:11.856037   16875 main.go:141] libmachine: (addons-459174)   <devices>
	I1212 19:57:11.856052   16875 main.go:141] libmachine: (addons-459174)     <disk type='file' device='cdrom'>
	I1212 19:57:11.856073   16875 main.go:141] libmachine: (addons-459174)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/boot2docker.iso'/>
	I1212 19:57:11.856088   16875 main.go:141] libmachine: (addons-459174)       <target dev='hdc' bus='scsi'/>
	I1212 19:57:11.856099   16875 main.go:141] libmachine: (addons-459174)       <readonly/>
	I1212 19:57:11.856110   16875 main.go:141] libmachine: (addons-459174)     </disk>
	I1212 19:57:11.856123   16875 main.go:141] libmachine: (addons-459174)     <disk type='file' device='disk'>
	I1212 19:57:11.856142   16875 main.go:141] libmachine: (addons-459174)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 19:57:11.856162   16875 main.go:141] libmachine: (addons-459174)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/addons-459174.rawdisk'/>
	I1212 19:57:11.856184   16875 main.go:141] libmachine: (addons-459174)       <target dev='hda' bus='virtio'/>
	I1212 19:57:11.856195   16875 main.go:141] libmachine: (addons-459174)     </disk>
	I1212 19:57:11.856206   16875 main.go:141] libmachine: (addons-459174)     <interface type='network'>
	I1212 19:57:11.856220   16875 main.go:141] libmachine: (addons-459174)       <source network='mk-addons-459174'/>
	I1212 19:57:11.856233   16875 main.go:141] libmachine: (addons-459174)       <model type='virtio'/>
	I1212 19:57:11.856263   16875 main.go:141] libmachine: (addons-459174)     </interface>
	I1212 19:57:11.856291   16875 main.go:141] libmachine: (addons-459174)     <interface type='network'>
	I1212 19:57:11.856309   16875 main.go:141] libmachine: (addons-459174)       <source network='default'/>
	I1212 19:57:11.856324   16875 main.go:141] libmachine: (addons-459174)       <model type='virtio'/>
	I1212 19:57:11.856339   16875 main.go:141] libmachine: (addons-459174)     </interface>
	I1212 19:57:11.856355   16875 main.go:141] libmachine: (addons-459174)     <serial type='pty'>
	I1212 19:57:11.856370   16875 main.go:141] libmachine: (addons-459174)       <target port='0'/>
	I1212 19:57:11.856384   16875 main.go:141] libmachine: (addons-459174)     </serial>
	I1212 19:57:11.856399   16875 main.go:141] libmachine: (addons-459174)     <console type='pty'>
	I1212 19:57:11.856415   16875 main.go:141] libmachine: (addons-459174)       <target type='serial' port='0'/>
	I1212 19:57:11.856429   16875 main.go:141] libmachine: (addons-459174)     </console>
	I1212 19:57:11.856449   16875 main.go:141] libmachine: (addons-459174)     <rng model='virtio'>
	I1212 19:57:11.856469   16875 main.go:141] libmachine: (addons-459174)       <backend model='random'>/dev/random</backend>
	I1212 19:57:11.856483   16875 main.go:141] libmachine: (addons-459174)     </rng>
	I1212 19:57:11.856492   16875 main.go:141] libmachine: (addons-459174)     
	I1212 19:57:11.856505   16875 main.go:141] libmachine: (addons-459174)     
	I1212 19:57:11.856517   16875 main.go:141] libmachine: (addons-459174)   </devices>
	I1212 19:57:11.856530   16875 main.go:141] libmachine: (addons-459174) </domain>
	I1212 19:57:11.856546   16875 main.go:141] libmachine: (addons-459174) 
	I1212 19:57:11.862056   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:95:98:3a in network default
	I1212 19:57:11.862714   16875 main.go:141] libmachine: (addons-459174) Ensuring networks are active...
	I1212 19:57:11.862735   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:11.863420   16875 main.go:141] libmachine: (addons-459174) Ensuring network default is active
	I1212 19:57:11.863842   16875 main.go:141] libmachine: (addons-459174) Ensuring network mk-addons-459174 is active
	I1212 19:57:11.864631   16875 main.go:141] libmachine: (addons-459174) Getting domain xml...
	I1212 19:57:11.866067   16875 main.go:141] libmachine: (addons-459174) Creating domain...
	I1212 19:57:13.284017   16875 main.go:141] libmachine: (addons-459174) Waiting to get IP...
	I1212 19:57:13.284736   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:13.285111   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:13.285151   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:13.285102   16896 retry.go:31] will retry after 239.249724ms: waiting for machine to come up
	I1212 19:57:13.525542   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:13.525954   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:13.525983   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:13.525903   16896 retry.go:31] will retry after 242.485349ms: waiting for machine to come up
	I1212 19:57:13.770314   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:13.770723   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:13.770752   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:13.770678   16896 retry.go:31] will retry after 438.092543ms: waiting for machine to come up
	I1212 19:57:14.210297   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:14.210731   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:14.210763   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:14.210684   16896 retry.go:31] will retry after 542.696433ms: waiting for machine to come up
	I1212 19:57:14.755319   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:14.755672   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:14.755706   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:14.755628   16896 retry.go:31] will retry after 604.694256ms: waiting for machine to come up
	I1212 19:57:15.361318   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:15.361788   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:15.361814   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:15.361740   16896 retry.go:31] will retry after 928.246981ms: waiting for machine to come up
	I1212 19:57:16.291160   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:16.291606   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:16.291636   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:16.291556   16896 retry.go:31] will retry after 830.782613ms: waiting for machine to come up
	I1212 19:57:17.123497   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:17.123906   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:17.123934   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:17.123850   16896 retry.go:31] will retry after 1.254626454s: waiting for machine to come up
	I1212 19:57:18.380294   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:18.380705   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:18.380739   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:18.380647   16896 retry.go:31] will retry after 1.74053727s: waiting for machine to come up
	I1212 19:57:20.123640   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:20.124043   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:20.124066   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:20.123996   16896 retry.go:31] will retry after 2.236410577s: waiting for machine to come up
	I1212 19:57:22.361708   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:22.362062   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:22.362094   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:22.362025   16896 retry.go:31] will retry after 2.057831719s: waiting for machine to come up
	I1212 19:57:24.422193   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:24.422587   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:24.422615   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:24.422533   16896 retry.go:31] will retry after 2.657554435s: waiting for machine to come up
	I1212 19:57:27.081788   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:27.082129   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:27.082172   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:27.082067   16896 retry.go:31] will retry after 4.484758443s: waiting for machine to come up
	I1212 19:57:31.568374   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:31.568717   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find current IP address of domain addons-459174 in network mk-addons-459174
	I1212 19:57:31.568744   16875 main.go:141] libmachine: (addons-459174) DBG | I1212 19:57:31.568676   16896 retry.go:31] will retry after 5.16574906s: waiting for machine to come up
	I1212 19:57:36.735433   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:36.735863   16875 main.go:141] libmachine: (addons-459174) Found IP for machine: 192.168.39.145
	I1212 19:57:36.735888   16875 main.go:141] libmachine: (addons-459174) Reserving static IP address...
	I1212 19:57:36.735905   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has current primary IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:36.736273   16875 main.go:141] libmachine: (addons-459174) DBG | unable to find host DHCP lease matching {name: "addons-459174", mac: "52:54:00:e7:fb:c5", ip: "192.168.39.145"} in network mk-addons-459174
	I1212 19:57:36.806261   16875 main.go:141] libmachine: (addons-459174) DBG | Getting to WaitForSSH function...
	I1212 19:57:36.806296   16875 main.go:141] libmachine: (addons-459174) Reserved static IP address: 192.168.39.145
	I1212 19:57:36.806312   16875 main.go:141] libmachine: (addons-459174) Waiting for SSH to be available...
	I1212 19:57:36.808535   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:36.808880   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:36.808915   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:36.809021   16875 main.go:141] libmachine: (addons-459174) DBG | Using SSH client type: external
	I1212 19:57:36.809050   16875 main.go:141] libmachine: (addons-459174) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa (-rw-------)
	I1212 19:57:36.809100   16875 main.go:141] libmachine: (addons-459174) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 19:57:36.809133   16875 main.go:141] libmachine: (addons-459174) DBG | About to run SSH command:
	I1212 19:57:36.809150   16875 main.go:141] libmachine: (addons-459174) DBG | exit 0
	I1212 19:57:36.903168   16875 main.go:141] libmachine: (addons-459174) DBG | SSH cmd err, output: <nil>: 
	I1212 19:57:36.903486   16875 main.go:141] libmachine: (addons-459174) KVM machine creation complete!
	I1212 19:57:36.903765   16875 main.go:141] libmachine: (addons-459174) Calling .GetConfigRaw
	I1212 19:57:36.904362   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:36.904546   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:36.904700   16875 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 19:57:36.904718   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:57:36.905829   16875 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 19:57:36.905842   16875 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 19:57:36.905848   16875 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 19:57:36.905854   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:36.907748   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:36.908068   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:36.908090   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:36.908258   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:36.908422   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:36.908553   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:36.908686   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:36.908840   16875 main.go:141] libmachine: Using SSH client type: native
	I1212 19:57:36.909217   16875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 19:57:36.909231   16875 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 19:57:37.022563   16875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:37.022591   16875 main.go:141] libmachine: Detecting the provisioner...
	I1212 19:57:37.022600   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:37.025134   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.025423   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.025451   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.025560   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:37.025764   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.025940   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.026073   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:37.026254   16875 main.go:141] libmachine: Using SSH client type: native
	I1212 19:57:37.026571   16875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 19:57:37.026582   16875 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 19:57:37.139847   16875 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 19:57:37.139964   16875 main.go:141] libmachine: found compatible host: buildroot
	I1212 19:57:37.139977   16875 main.go:141] libmachine: Provisioning with buildroot...
	I1212 19:57:37.139985   16875 main.go:141] libmachine: (addons-459174) Calling .GetMachineName
	I1212 19:57:37.140277   16875 buildroot.go:166] provisioning hostname "addons-459174"
	I1212 19:57:37.140308   16875 main.go:141] libmachine: (addons-459174) Calling .GetMachineName
	I1212 19:57:37.140480   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:37.142863   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.143193   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.143220   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.143428   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:37.143618   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.143804   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.143956   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:37.144151   16875 main.go:141] libmachine: Using SSH client type: native
	I1212 19:57:37.144452   16875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 19:57:37.144466   16875 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-459174 && echo "addons-459174" | sudo tee /etc/hostname
	I1212 19:57:37.273124   16875 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459174
	
	I1212 19:57:37.273155   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:37.275747   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.276099   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.276140   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.276313   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:37.276497   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.276688   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.276850   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:37.277041   16875 main.go:141] libmachine: Using SSH client type: native
	I1212 19:57:37.277376   16875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 19:57:37.277400   16875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-459174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-459174/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-459174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:57:37.403435   16875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:57:37.403463   16875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 19:57:37.403517   16875 buildroot.go:174] setting up certificates
	I1212 19:57:37.403533   16875 provision.go:83] configureAuth start
	I1212 19:57:37.403548   16875 main.go:141] libmachine: (addons-459174) Calling .GetMachineName
	I1212 19:57:37.403815   16875 main.go:141] libmachine: (addons-459174) Calling .GetIP
	I1212 19:57:37.406487   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.406849   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.406879   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.406997   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:37.410076   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.410411   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.410447   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.410576   16875 provision.go:138] copyHostCerts
	I1212 19:57:37.410646   16875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 19:57:37.410768   16875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 19:57:37.410853   16875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 19:57:37.410915   16875 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.addons-459174 san=[192.168.39.145 192.168.39.145 localhost 127.0.0.1 minikube addons-459174]
	I1212 19:57:37.571519   16875 provision.go:172] copyRemoteCerts
	I1212 19:57:37.571583   16875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:57:37.571611   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:37.574136   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.574428   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.574459   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.574583   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:37.574782   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.574916   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:37.575065   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
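	The entries above show minikube opening a key-authenticated SSH session to the new VM (user docker, key under .minikube/machines/addons-459174). As a rough, self-contained illustration of that pattern only, and not minikube's actual sshutil code, a comparable client can be built with golang.org/x/crypto/ssh; the key path below is a placeholder:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder key path mirroring the log above; adjust for your environment.
		key, err := os.ReadFile("/path/to/.minikube/machines/addons-459174/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Same trivial liveness check the provisioner runs: "exit 0".
		out, err := sess.CombinedOutput("exit 0")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("ssh output: %q\n", out)
	}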
	I1212 19:57:37.660389   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 19:57:37.683943   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 19:57:37.706893   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:57:37.730394   16875 provision.go:86] duration metric: configureAuth took 326.844364ms
	I1212 19:57:37.730422   16875 buildroot.go:189] setting minikube options for container-runtime
	I1212 19:57:37.730621   16875 config.go:182] Loaded profile config "addons-459174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 19:57:37.730715   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:37.733068   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.733391   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:37.733421   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:37.733571   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:37.733736   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.733924   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:37.734074   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:37.734217   16875 main.go:141] libmachine: Using SSH client type: native
	I1212 19:57:37.734538   16875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 19:57:37.734553   16875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 19:57:38.064850   16875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 19:57:38.064886   16875 main.go:141] libmachine: Checking connection to Docker...
	I1212 19:57:38.064910   16875 main.go:141] libmachine: (addons-459174) Calling .GetURL
	I1212 19:57:38.065968   16875 main.go:141] libmachine: (addons-459174) DBG | Using libvirt version 6000000
	I1212 19:57:38.068235   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.068630   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.068669   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.068831   16875 main.go:141] libmachine: Docker is up and running!
	I1212 19:57:38.068844   16875 main.go:141] libmachine: Reticulating splines...
	I1212 19:57:38.068850   16875 client.go:171] LocalClient.Create took 26.758696043s
	I1212 19:57:38.068871   16875 start.go:167] duration metric: libmachine.API.Create for "addons-459174" took 26.7587684s
	I1212 19:57:38.068892   16875 start.go:300] post-start starting for "addons-459174" (driver="kvm2")
	I1212 19:57:38.068908   16875 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:57:38.068928   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:38.069200   16875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:57:38.069223   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:38.071126   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.071485   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.071508   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.071721   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:38.071898   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:38.072061   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:38.072207   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:57:38.161630   16875 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:57:38.165925   16875 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 19:57:38.165952   16875 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 19:57:38.166033   16875 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 19:57:38.166058   16875 start.go:303] post-start completed in 97.157159ms
	I1212 19:57:38.166091   16875 main.go:141] libmachine: (addons-459174) Calling .GetConfigRaw
	I1212 19:57:38.166656   16875 main.go:141] libmachine: (addons-459174) Calling .GetIP
	I1212 19:57:38.169017   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.169382   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.169402   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.169681   16875 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/config.json ...
	I1212 19:57:38.169859   16875 start.go:128] duration metric: createHost completed in 26.879111556s
	I1212 19:57:38.169883   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:38.171887   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.172239   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.172276   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.172386   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:38.172562   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:38.172724   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:38.172857   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:38.172987   16875 main.go:141] libmachine: Using SSH client type: native
	I1212 19:57:38.173329   16875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 19:57:38.173342   16875 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 19:57:38.293272   16875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702411058.278405137
	
	I1212 19:57:38.293301   16875 fix.go:206] guest clock: 1702411058.278405137
	I1212 19:57:38.293309   16875 fix.go:219] Guest: 2023-12-12 19:57:38.278405137 +0000 UTC Remote: 2023-12-12 19:57:38.169871714 +0000 UTC m=+26.992853974 (delta=108.533423ms)
	I1212 19:57:38.293328   16875 fix.go:190] guest clock delta is within tolerance: 108.533423ms
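	The clock check above reads the guest time with "date +%s.%N" and compares it against the host-side timestamp, accepting the ~108ms delta as within tolerance. A minimal sketch of that comparison, using the two values from the log and a hypothetical tolerance (the real threshold minikube applies is not shown here):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "date +%s.%N" output (seconds.nanoseconds) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseGuestClock("1702411058.278405137") // guest reading from the log above
		if err != nil {
			panic(err)
		}
		// Host-side reading taken from the same log line.
		host := time.Date(2023, 12, 12, 19, 57, 38, 169871714, time.UTC)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		// Hypothetical tolerance, for illustration only.
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}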
	I1212 19:57:38.293333   16875 start.go:83] releasing machines lock for "addons-459174", held for 27.002692103s
	I1212 19:57:38.293353   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:38.293621   16875 main.go:141] libmachine: (addons-459174) Calling .GetIP
	I1212 19:57:38.296282   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.296690   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.296714   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.296893   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:38.297363   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:38.297536   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:57:38.297624   16875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:57:38.297675   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:38.297775   16875 ssh_runner.go:195] Run: cat /version.json
	I1212 19:57:38.297805   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:57:38.300166   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.300326   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.300504   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.300526   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.300753   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:38.300780   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:38.300810   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:38.300961   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:57:38.300968   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:38.301126   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:38.301139   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:57:38.301305   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:57:38.301314   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:57:38.301455   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:57:38.412789   16875 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:38.418563   16875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 19:57:38.574267   16875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:57:38.581288   16875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:57:38.581358   16875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:57:38.595657   16875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 19:57:38.595676   16875 start.go:475] detecting cgroup driver to use...
	I1212 19:57:38.595742   16875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:57:38.607987   16875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:57:38.619287   16875 docker.go:203] disabling cri-docker service (if available) ...
	I1212 19:57:38.619354   16875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:57:38.631322   16875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:57:38.643331   16875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:57:38.749516   16875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:57:38.867581   16875 docker.go:219] disabling docker service ...
	I1212 19:57:38.867689   16875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:57:38.880792   16875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:57:38.892860   16875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:57:38.995986   16875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:57:39.096039   16875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:57:39.108166   16875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:57:39.124558   16875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 19:57:39.124625   16875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:57:39.133378   16875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 19:57:39.133456   16875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:57:39.142369   16875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:57:39.152334   16875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:57:39.161566   16875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:57:39.170977   16875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:57:39.178846   16875 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 19:57:39.178898   16875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 19:57:39.190663   16875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
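	The three commands above follow a simple fallback pattern: probe the net.bridge.bridge-nf-call-iptables sysctl, load the br_netfilter module if the probe fails, then enable IPv4 forwarding. An illustrative sketch of the same sequence driven from Go, assuming passwordless sudo and local execution rather than execution over SSH:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command, logging combined output on failure.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return err
	}

	func main() {
		// If the bridge netfilter sysctl is missing, the kernel module is not loaded yet,
		// so fall back to loading br_netfilter before enabling IP forwarding.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				log.Fatal("could not load br_netfilter")
			}
		}
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			log.Fatal(err)
		}
	}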
	I1212 19:57:39.198723   16875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:57:39.295952   16875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 19:57:39.465252   16875 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 19:57:39.465328   16875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 19:57:39.469933   16875 start.go:543] Will wait 60s for crictl version
	I1212 19:57:39.469977   16875 ssh_runner.go:195] Run: which crictl
	I1212 19:57:39.475696   16875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 19:57:39.515975   16875 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 19:57:39.516110   16875 ssh_runner.go:195] Run: crio --version
	I1212 19:57:39.560884   16875 ssh_runner.go:195] Run: crio --version
	I1212 19:57:39.606656   16875 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 19:57:39.608078   16875 main.go:141] libmachine: (addons-459174) Calling .GetIP
	I1212 19:57:39.610722   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:39.611036   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:57:39.611068   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:57:39.611255   16875 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 19:57:39.615285   16875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:57:39.628352   16875 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 19:57:39.628406   16875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:57:39.662569   16875 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 19:57:39.662660   16875 ssh_runner.go:195] Run: which lz4
	I1212 19:57:39.666943   16875 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 19:57:39.670723   16875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 19:57:39.670753   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 19:57:41.525921   16875 crio.go:444] Took 1.859011 seconds to copy over tarball
	I1212 19:57:41.525986   16875 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 19:57:44.872455   16875 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.346443841s)
	I1212 19:57:44.872485   16875 crio.go:451] Took 3.346534 seconds to extract the tarball
	I1212 19:57:44.872496   16875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 19:57:44.915486   16875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:57:44.983658   16875 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 19:57:44.983681   16875 cache_images.go:84] Images are preloaded, skipping loading
	I1212 19:57:44.983753   16875 ssh_runner.go:195] Run: crio config
	I1212 19:57:45.048506   16875 cni.go:84] Creating CNI manager for ""
	I1212 19:57:45.048531   16875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:57:45.048550   16875 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 19:57:45.048573   16875 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-459174 NodeName:addons-459174 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:57:45.048807   16875 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-459174"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
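	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a small sketch of how such a stream can be read back and sanity-checked, using gopkg.in/yaml.v3 and a hypothetical local copy named kubeadm.yaml:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Decode each "---"-separated document in turn and report its type.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if err == io.EOF {
					break
				}
				log.Fatal(err)
			}
			fmt.Printf("found %v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}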
	I1212 19:57:45.048965   16875 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-459174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-459174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 19:57:45.049047   16875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 19:57:45.057937   16875 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 19:57:45.057996   16875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:57:45.066467   16875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1212 19:57:45.082704   16875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 19:57:45.098080   16875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1212 19:57:45.113488   16875 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I1212 19:57:45.117320   16875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:57:45.129810   16875 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174 for IP: 192.168.39.145
	I1212 19:57:45.129849   16875 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.129973   16875 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 19:57:45.175660   16875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt ...
	I1212 19:57:45.175686   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt: {Name:mk149ad59a6db34cb2e8a98bd802cb19af3ed2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.175837   16875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key ...
	I1212 19:57:45.175848   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key: {Name:mk5fc7dbbb23719af192dd5b46eeb683be204530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.175939   16875 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 19:57:45.326376   16875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt ...
	I1212 19:57:45.326406   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt: {Name:mkfbfd27bb9de615b0db401af5de12979c0f020d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.326552   16875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key ...
	I1212 19:57:45.326562   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key: {Name:mk7f9c8a4c6aa499871b9931a564a1de69bc0566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.326651   16875 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.key
	I1212 19:57:45.326664   16875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt with IP's: []
	I1212 19:57:45.519830   16875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt ...
	I1212 19:57:45.519860   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: {Name:mk0efaf4dc931625415476d0896496b2bde49b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.520006   16875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.key ...
	I1212 19:57:45.520016   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.key: {Name:mk3c5ed544887a8ca0ba86d8fe7b7e3944f4283e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.520075   16875 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.key.c05e0d2e
	I1212 19:57:45.520091   16875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.crt.c05e0d2e with IP's: [192.168.39.145 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 19:57:45.575461   16875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.crt.c05e0d2e ...
	I1212 19:57:45.575489   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.crt.c05e0d2e: {Name:mk7be7f422bb06b0e03fff7878d59296d1c2ee39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.575630   16875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.key.c05e0d2e ...
	I1212 19:57:45.575642   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.key.c05e0d2e: {Name:mk590cfeb5118c28c22b48e0c8b351071d329ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.575709   16875 certs.go:337] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.crt.c05e0d2e -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.crt
	I1212 19:57:45.575792   16875 certs.go:341] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.key.c05e0d2e -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.key
	I1212 19:57:45.575842   16875 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.key
	I1212 19:57:45.575857   16875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.crt with IP's: []
	I1212 19:57:45.661598   16875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.crt ...
	I1212 19:57:45.661628   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.crt: {Name:mk2926b0c9d1ae57d0b8fa78507c5d9ae6f6a843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:57:45.661792   16875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.key ...
	I1212 19:57:45.661808   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.key: {Name:mk8d58ab53bbd4a6bff02edbb19bd003ddc369e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
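	The certs.go/crypto.go steps above generate a self-signed minikubeCA plus client, apiserver, and aggregator certificates. For orientation only, and not minikube's actual implementation, a self-signed CA of this kind can be produced with the Go standard library roughly as follows:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		// Self-signed: the template serves as both certificate and parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		certOut, err := os.Create("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		defer certOut.Close()
		pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

		keyOut, err := os.Create("ca.key")
		if err != nil {
			log.Fatal(err)
		}
		defer keyOut.Close()
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}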
	I1212 19:57:45.661962   16875 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 19:57:45.662000   16875 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 19:57:45.662030   16875 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:57:45.662055   16875 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 19:57:45.662650   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 19:57:45.687046   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:57:45.709429   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:57:45.734231   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 19:57:45.759506   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:57:45.782911   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:57:45.806259   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:57:45.829372   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:57:45.853009   16875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:57:45.877444   16875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:57:45.894069   16875 ssh_runner.go:195] Run: openssl version
	I1212 19:57:45.900384   16875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 19:57:45.910253   16875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:45.914978   16875 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:45.915063   16875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:57:45.920798   16875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 19:57:45.930992   16875 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 19:57:45.935722   16875 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 19:57:45.935786   16875 kubeadm.go:404] StartCluster: {Name:addons-459174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:addons-459174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:57:45.935878   16875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:57:45.935929   16875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:57:45.975606   16875 cri.go:89] found id: ""
	I1212 19:57:45.975687   16875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:57:45.984945   16875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:57:45.994350   16875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:57:46.003206   16875 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:57:46.003263   16875 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 19:57:46.219144   16875 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:57:58.835348   16875 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 19:57:58.835421   16875 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 19:57:58.835538   16875 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:57:58.835678   16875 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:57:58.835809   16875 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 19:57:58.835908   16875 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:57:58.837762   16875 out.go:204]   - Generating certificates and keys ...
	I1212 19:57:58.837864   16875 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 19:57:58.837960   16875 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 19:57:58.838055   16875 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 19:57:58.838138   16875 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 19:57:58.838228   16875 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 19:57:58.838309   16875 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 19:57:58.838400   16875 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 19:57:58.838550   16875 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-459174 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I1212 19:57:58.838710   16875 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 19:57:58.838897   16875 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-459174 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I1212 19:57:58.839010   16875 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 19:57:58.839120   16875 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 19:57:58.839195   16875 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 19:57:58.839287   16875 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:57:58.839369   16875 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:57:58.839443   16875 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:57:58.839532   16875 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:57:58.839581   16875 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:57:58.839680   16875 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:57:58.839809   16875 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:57:58.841510   16875 out.go:204]   - Booting up control plane ...
	I1212 19:57:58.841628   16875 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:57:58.841758   16875 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:57:58.841858   16875 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:57:58.841968   16875 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:57:58.842105   16875 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:57:58.842148   16875 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 19:57:58.842270   16875 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 19:57:58.842348   16875 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002102 seconds
	I1212 19:57:58.842469   16875 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 19:57:58.842576   16875 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 19:57:58.842624   16875 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 19:57:58.842818   16875 kubeadm.go:322] [mark-control-plane] Marking the node addons-459174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 19:57:58.842906   16875 kubeadm.go:322] [bootstrap-token] Using token: g51m82.60dx8zry8lgwujbl
	I1212 19:57:58.846536   16875 out.go:204]   - Configuring RBAC rules ...
	I1212 19:57:58.846683   16875 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 19:57:58.846786   16875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 19:57:58.846948   16875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 19:57:58.847122   16875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 19:57:58.847291   16875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 19:57:58.847415   16875 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 19:57:58.847554   16875 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 19:57:58.847599   16875 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 19:57:58.847641   16875 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 19:57:58.847650   16875 kubeadm.go:322] 
	I1212 19:57:58.847749   16875 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 19:57:58.847766   16875 kubeadm.go:322] 
	I1212 19:57:58.847868   16875 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 19:57:58.847877   16875 kubeadm.go:322] 
	I1212 19:57:58.847903   16875 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 19:57:58.847952   16875 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 19:57:58.847995   16875 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 19:57:58.848011   16875 kubeadm.go:322] 
	I1212 19:57:58.848076   16875 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 19:57:58.848083   16875 kubeadm.go:322] 
	I1212 19:57:58.848136   16875 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 19:57:58.848143   16875 kubeadm.go:322] 
	I1212 19:57:58.848198   16875 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 19:57:58.848305   16875 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 19:57:58.848403   16875 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 19:57:58.848413   16875 kubeadm.go:322] 
	I1212 19:57:58.848479   16875 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 19:57:58.848561   16875 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 19:57:58.848568   16875 kubeadm.go:322] 
	I1212 19:57:58.848657   16875 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g51m82.60dx8zry8lgwujbl \
	I1212 19:57:58.848770   16875 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 19:57:58.848790   16875 kubeadm.go:322] 	--control-plane 
	I1212 19:57:58.848796   16875 kubeadm.go:322] 
	I1212 19:57:58.848888   16875 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 19:57:58.848898   16875 kubeadm.go:322] 
	I1212 19:57:58.849010   16875 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g51m82.60dx8zry8lgwujbl \
	I1212 19:57:58.849152   16875 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
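(Editor's note, not part of the captured log.) The --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA certificate's SubjectPublicKeyInfo. As a minimal sketch of how an operator could recompute it before joining a node (assuming the CA certificate is readable at kubeadm's default path /etc/kubernetes/pki/ca.crt on the control plane), not minikube's own code:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate (kubeadm's default location; adjust if needed).
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Run against the same CA, the output should match the sha256:e51668... value in the join command above.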
	I1212 19:57:58.849167   16875 cni.go:84] Creating CNI manager for ""
	I1212 19:57:58.849178   16875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:57:58.851075   16875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 19:57:58.852598   16875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 19:57:58.884659   16875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
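(Editor's note, not part of the captured log.) The 457-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration the log mentions; its exact contents are not reproduced in the output. As an illustrative sketch only (the plugin fields and subnet below are assumptions, not the file minikube actually generates), writing a minimal bridge + portmap conflist could look like this:

package main

import "os"

// Illustrative bridge CNI config; the real 1-k8s.conflist generated by
// minikube may differ in name, subnet, and plugin options.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}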
	I1212 19:57:58.945477   16875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 19:57:58.945542   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:57:58.945587   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=addons-459174 minikube.k8s.io/updated_at=2023_12_12T19_57_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:57:59.140640   16875 ops.go:34] apiserver oom_adj: -16
	I1212 19:57:59.140680   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:57:59.246242   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:57:59.840543   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:00.340759   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:00.840870   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:01.340801   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:01.840661   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:02.340103   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:02.840878   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:03.340526   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:03.840324   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:04.340064   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:04.840313   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:05.340835   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:05.840944   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:06.340611   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:06.840664   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:07.340006   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:07.840069   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:08.340239   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:08.840828   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:09.340873   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:09.840892   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:10.340066   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:10.839956   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:11.339995   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:11.840106   16875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:58:11.939882   16875 kubeadm.go:1088] duration metric: took 12.994389576s to wait for elevateKubeSystemPrivileges.
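(Editor's note, not part of the captured log.) The burst of repeated "kubectl get sa default" calls above is minikube polling until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal sketch of the same wait loop, shelling out to kubectl via os/exec with the paths shown in the log used as placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll roughly every 500ms until the "default" ServiceAccount exists,
	// giving up after one minute. Binary and kubeconfig paths are illustrative.
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}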
	I1212 19:58:11.939916   16875 kubeadm.go:406] StartCluster complete in 26.004138046s
	I1212 19:58:11.939938   16875 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:58:11.940069   16875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 19:58:11.940516   16875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:58:11.940771   16875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 19:58:11.940776   16875 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 19:58:11.940895   16875 addons.go:69] Setting default-storageclass=true in profile "addons-459174"
	I1212 19:58:11.940909   16875 addons.go:69] Setting metrics-server=true in profile "addons-459174"
	I1212 19:58:11.940930   16875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-459174"
	I1212 19:58:11.940950   16875 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-459174"
	I1212 19:58:11.940978   16875 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-459174"
	I1212 19:58:11.940989   16875 addons.go:69] Setting inspektor-gadget=true in profile "addons-459174"
	I1212 19:58:11.941023   16875 addons.go:69] Setting registry=true in profile "addons-459174"
	I1212 19:58:11.941027   16875 addons.go:231] Setting addon inspektor-gadget=true in "addons-459174"
	I1212 19:58:11.941005   16875 addons.go:69] Setting gcp-auth=true in profile "addons-459174"
	I1212 19:58:11.941041   16875 addons.go:231] Setting addon registry=true in "addons-459174"
	I1212 19:58:11.941025   16875 addons.go:69] Setting storage-provisioner=true in profile "addons-459174"
	I1212 19:58:11.941017   16875 addons.go:69] Setting helm-tiller=true in profile "addons-459174"
	I1212 19:58:11.941062   16875 addons.go:69] Setting ingress=true in profile "addons-459174"
	I1212 19:58:11.941070   16875 mustload.go:65] Loading cluster: addons-459174
	I1212 19:58:11.941075   16875 addons.go:231] Setting addon helm-tiller=true in "addons-459174"
	I1212 19:58:11.941084   16875 addons.go:231] Setting addon ingress=true in "addons-459174"
	I1212 19:58:11.941094   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.941109   16875 addons.go:69] Setting ingress-dns=true in profile "addons-459174"
	I1212 19:58:11.941124   16875 addons.go:231] Setting addon ingress-dns=true in "addons-459174"
	I1212 19:58:11.941130   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.941130   16875 addons.go:231] Setting addon storage-provisioner=true in "addons-459174"
	I1212 19:58:11.941159   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.941165   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.941218   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.941300   16875 config.go:182] Loaded profile config "addons-459174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 19:58:11.941510   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.941520   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.941531   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.941540   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.941552   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.941583   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.941584   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.941007   16875 config.go:182] Loaded profile config "addons-459174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 19:58:11.941607   16875 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-459174"
	I1212 19:58:11.941912   16875 addons.go:69] Setting cloud-spanner=true in profile "addons-459174"
	I1212 19:58:11.941962   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.941975   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.941998   16875 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-459174"
	I1212 19:58:11.941050   16875 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-459174"
	I1212 19:58:11.942041   16875 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-459174"
	I1212 19:58:11.941041   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.940933   16875 addons.go:231] Setting addon metrics-server=true in "addons-459174"
	I1212 19:58:11.942325   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.942529   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.942588   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.942669   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.942729   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.942756   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.941097   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.943008   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.943034   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.943103   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.943117   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.941968   16875 addons.go:231] Setting addon cloud-spanner=true in "addons-459174"
	I1212 19:58:11.940897   16875 addons.go:69] Setting volumesnapshots=true in profile "addons-459174"
	I1212 19:58:11.943166   16875 addons.go:231] Setting addon volumesnapshots=true in "addons-459174"
	I1212 19:58:11.943164   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.943194   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.943360   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.943392   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.943564   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.943612   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.944060   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.944069   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.944088   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.944093   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.944308   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.945091   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.945114   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.963118   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I1212 19:58:11.963582   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.963724   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I1212 19:58:11.963905   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I1212 19:58:11.964095   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.964347   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.964368   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.964525   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.964537   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.964608   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.964695   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.965554   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.965569   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.965637   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.965676   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:11.965714   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I1212 19:58:11.966181   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.966213   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.972518   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.972666   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.972713   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I1212 19:58:11.972828   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
	I1212 19:58:11.973689   16875 addons.go:231] Setting addon default-storageclass=true in "addons-459174"
	I1212 19:58:11.973731   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.974091   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.974123   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.974943   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.974958   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.975425   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.975460   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.976106   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.976168   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.976859   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.976879   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.977413   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.977437   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.981963   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 19:58:11.981971   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.982040   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.983005   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:11.983103   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.983122   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.983599   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.983627   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.983900   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.984097   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:11.984128   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:11.984352   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:11.984421   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:11.985094   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.985144   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:11.986615   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:11.987103   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:11.987128   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.006642   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I1212 19:58:12.007168   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.007712   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.007734   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.008125   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.008334   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.009235   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42329
	I1212 19:58:12.009796   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.010322   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.010342   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.010526   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I1212 19:58:12.010673   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.010673   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.011280   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.011308   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.011579   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.013468   16875 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 19:58:12.012098   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.014856   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.016256   16875 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 19:58:12.015296   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.016711   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I1212 19:58:12.017918   16875 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 19:58:12.017936   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 19:58:12.017960   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.018467   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.018866   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I1212 19:58:12.019140   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.019153   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.019590   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.019611   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.020090   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.020604   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.020644   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.020837   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I1212 19:58:12.021171   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.021278   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.021308   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.022299   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.022321   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.022395   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.022455   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.022473   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.022535   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.022552   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.022770   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.022778   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.022909   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.022910   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.023111   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.023762   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.025535   16875 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-459174"
	I1212 19:58:12.025577   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:12.025957   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.026003   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.029484   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I1212 19:58:12.029752   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.031776   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.032043   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39501
	I1212 19:58:12.034042   16875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 19:58:12.032379   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.032605   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.033159   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I1212 19:58:12.036979   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34821
	I1212 19:58:12.036994   16875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 19:58:12.035967   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.036314   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.036843   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.037690   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.037736   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I1212 19:58:12.040230   16875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 19:58:12.041722   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1212 19:58:12.040248   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.041954   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.041967   16875 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:58:12.041985   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 19:58:12.038889   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.042005   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.039337   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.039781   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.042069   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.038850   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.042095   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.042284   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.042521   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.042602   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.043034   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.043052   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.043114   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.043130   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.043173   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.043436   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.043503   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.043503   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.043635   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.043648   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.044079   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.044221   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.044283   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.044564   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.044604   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.044819   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.044847   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.046657   16875 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 19:58:12.045542   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.046392   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I1212 19:58:12.047365   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.047390   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.048079   16875 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:58:12.048091   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 19:58:12.048107   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.048396   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.048420   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.049827   16875 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 19:58:12.048772   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.049360   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I1212 19:58:12.049640   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.051128   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.051178   16875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:58:12.053047   16875 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:12.051225   16875 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 19:58:12.051598   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.051611   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.051762   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.051819   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.052344   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.053081   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 19:58:12.053089   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.053100   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.053142   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:58:12.053154   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.053172   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.052378   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I1212 19:58:12.054842   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.054885   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.054961   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.054971   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37859
	I1212 19:58:12.054978   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.055040   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.055099   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.055696   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.055823   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.055839   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.056239   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.056280   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.056310   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.056311   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.056505   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.056518   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.056625   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.056689   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.057365   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.057957   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.057973   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.058891   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.058992   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.061350   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 19:58:12.059599   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:12.060983   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I1212 19:58:12.061123   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.062390   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.063181   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.063195   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:12.063202   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.064650   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 19:58:12.062538   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.062976   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.063391   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.063839   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.063971   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36823
	I1212 19:58:12.067354   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 19:58:12.066132   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.066299   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.066366   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.066531   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.066673   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.068738   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.070277   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 19:58:12.069063   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.069088   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.069191   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.069429   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.072256   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.073802   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 19:58:12.072707   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.072826   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.073039   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.073048   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.074396   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44643
	I1212 19:58:12.077303   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 19:58:12.075946   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.076423   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.077243   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.080387   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 19:58:12.079169   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I1212 19:58:12.080161   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.080634   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.081635   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39729
	I1212 19:58:12.082500   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.083299   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.083314   16875 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 19:58:12.083866   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.084735   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 19:58:12.085887   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.087454   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.085903   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I1212 19:58:12.085917   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.089259   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 19:58:12.089276   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 19:58:12.089292   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.085939   16875 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 19:58:12.086206   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I1212 19:58:12.086325   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.087426   16875 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:58:12.087775   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.087778   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.087925   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.090697   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.090740   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 19:58:12.090789   16875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 19:58:12.090799   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 19:58:12.090938   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.090964   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.091538   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.091767   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.091787   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.091803   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:12.092192   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.092355   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.092464   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.093090   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.093194   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:12.093215   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:12.093618   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:12.094028   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.094662   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.094850   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.094871   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.096555   16875 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 19:58:12.095323   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.095576   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.095831   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.095836   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:12.096021   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.096178   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.096534   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.096605   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.097243   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.098172   16875 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 19:58:12.098195   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 19:58:12.098209   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.098276   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.098299   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.098319   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.098338   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.098812   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.098828   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.100876   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:12.100896   16875 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1212 19:58:12.098853   16875 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.098874   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.098974   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.099139   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.101157   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.101865   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.102397   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:58:12.102509   16875 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1212 19:58:12.102681   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.103791   16875 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 19:58:12.103896   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.103985   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.104005   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.105523   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1212 19:58:12.105544   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.105582   16875 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 19:58:12.105660   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.105854   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.105904   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.106023   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.107134   16875 out.go:177]   - Using image docker.io/busybox:stable
	I1212 19:58:12.108866   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.108868   16875 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:58:12.108886   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 19:58:12.108896   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.107403   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.107257   16875 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 19:58:12.108956   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 19:58:12.108974   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:12.109483   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.109964   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.109986   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.110191   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.110400   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.110617   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.110665   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.110787   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.111232   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.111280   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.111521   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.111727   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.111885   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.111922   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.112031   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.112302   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.112326   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.112470   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.112616   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.112737   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.112852   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:12.113161   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.113534   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:12.113558   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:12.113785   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:12.113937   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:12.114075   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:12.114164   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	W1212 19:58:12.114915   16875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:58:12.114933   16875 retry.go:31] will retry after 243.041519ms: ssh: handshake failed: EOF
	I1212 19:58:12.151201   16875 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-459174" context rescaled to 1 replicas
	I1212 19:58:12.151266   16875 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
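A side note on the coredns rescale logged just above: on a single-node cluster minikube scales the stock two-replica coredns Deployment down to one replica, which is why one of the original coredns pods vanishes near the end of this excerpt. A minimal sketch of the equivalent manual steps, assuming kubectl is already pointed at the addons-459174 context:

    # scale coredns down to a single replica, then confirm which pods survive
    kubectl --context addons-459174 -n kube-system scale deployment coredns --replicas=1
    kubectl --context addons-459174 -n kube-system get pods -l k8s-app=kube-dns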
	I1212 19:58:12.153420   16875 out.go:177] * Verifying Kubernetes components...
	I1212 19:58:12.155100   16875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:58:12.263088   16875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 19:58:12.263751   16875 node_ready.go:35] waiting up to 6m0s for node "addons-459174" to be "Ready" ...
	I1212 19:58:12.280287   16875 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 19:58:12.280308   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 19:58:12.299714   16875 node_ready.go:49] node "addons-459174" has status "Ready":"True"
	I1212 19:58:12.299752   16875 node_ready.go:38] duration metric: took 35.968881ms waiting for node "addons-459174" to be "Ready" ...
	I1212 19:58:12.299766   16875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 19:58:12.336281   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:58:12.356248   16875 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 19:58:12.356280   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 19:58:12.412878   16875 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:58:12.412910   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 19:58:12.447753   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:58:12.461071   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:58:12.484095   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:58:12.490569   16875 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 19:58:12.490593   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 19:58:12.490717   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:58:12.503546   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 19:58:12.528527   16875 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 19:58:12.528550   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 19:58:12.538451   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 19:58:12.538473   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 19:58:12.554271   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:58:12.554521   16875 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1212 19:58:12.554544   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1212 19:58:12.559739   16875 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 19:58:12.559766   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 19:58:12.588684   16875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:12.675323   16875 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 19:58:12.675350   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 19:58:12.794044   16875 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 19:58:12.794068   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 19:58:12.810687   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:58:12.819997   16875 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:58:12.820029   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 19:58:12.832662   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 19:58:12.832687   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 19:58:12.866999   16875 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 19:58:12.867027   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1212 19:58:12.871998   16875 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 19:58:12.872023   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 19:58:12.962279   16875 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 19:58:12.962307   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 19:58:12.976246   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:58:13.007299   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 19:58:13.007327   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 19:58:13.198024   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 19:58:13.198047   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 19:58:13.212055   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 19:58:13.238943   16875 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 19:58:13.238975   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 19:58:13.246018   16875 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:58:13.246041   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 19:58:13.274787   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 19:58:13.274811   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 19:58:13.313771   16875 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 19:58:13.313794   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 19:58:13.336867   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:58:13.354687   16875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 19:58:13.354724   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 19:58:13.388134   16875 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 19:58:13.388158   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 19:58:13.411046   16875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 19:58:13.411067   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 19:58:13.463110   16875 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 19:58:13.463132   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 19:58:13.475015   16875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 19:58:13.475038   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 19:58:13.570854   16875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 19:58:13.570875   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 19:58:13.583296   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 19:58:13.731088   16875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 19:58:13.731116   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 19:58:13.775892   16875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:58:13.775918   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 19:58:13.817000   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:58:15.569454   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:16.320453   16875 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.057323364s)
	I1212 19:58:16.320484   16875 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
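The sed pipeline that just completed splices two stanzas into the cluster's Corefile before replacing the coredns ConfigMap: a hosts block mapping host.minikube.internal to the host-side gateway (192.168.39.1 here) directly above the forward directive, and a log directive above errors. The affected fragment of the resulting Corefile should therefore read roughly as follows (all other directives unchanged):

    log
    errors
    ...
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

To inspect the live result one could run kubectl -n kube-system get configmap coredns -o yaml against the same cluster.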
	I1212 19:58:17.913321   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:19.784962   16875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 19:58:19.784997   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:19.788251   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:19.788749   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:19.788793   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:19.789018   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:19.789237   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:19.789393   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:19.789569   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:20.061820   16875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 19:58:20.229807   16875 addons.go:231] Setting addon gcp-auth=true in "addons-459174"
	I1212 19:58:20.229858   16875 host.go:66] Checking if "addons-459174" exists ...
	I1212 19:58:20.230162   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:20.230197   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:20.258434   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I1212 19:58:20.258818   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:20.259249   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:20.259284   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:20.259595   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:20.260174   16875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 19:58:20.260210   16875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 19:58:20.274839   16875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I1212 19:58:20.275310   16875 main.go:141] libmachine: () Calling .GetVersion
	I1212 19:58:20.275806   16875 main.go:141] libmachine: Using API Version  1
	I1212 19:58:20.275833   16875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 19:58:20.276216   16875 main.go:141] libmachine: () Calling .GetMachineName
	I1212 19:58:20.276397   16875 main.go:141] libmachine: (addons-459174) Calling .GetState
	I1212 19:58:20.278050   16875 main.go:141] libmachine: (addons-459174) Calling .DriverName
	I1212 19:58:20.278276   16875 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 19:58:20.278297   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHHostname
	I1212 19:58:20.280695   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:20.281042   16875 main.go:141] libmachine: (addons-459174) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:fb:c5", ip: ""} in network mk-addons-459174: {Iface:virbr1 ExpiryTime:2023-12-12 20:57:27 +0000 UTC Type:0 Mac:52:54:00:e7:fb:c5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:addons-459174 Clientid:01:52:54:00:e7:fb:c5}
	I1212 19:58:20.281077   16875 main.go:141] libmachine: (addons-459174) DBG | domain addons-459174 has defined IP address 192.168.39.145 and MAC address 52:54:00:e7:fb:c5 in network mk-addons-459174
	I1212 19:58:20.281242   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHPort
	I1212 19:58:20.281415   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHKeyPath
	I1212 19:58:20.281579   16875 main.go:141] libmachine: (addons-459174) Calling .GetSSHUsername
	I1212 19:58:20.281729   16875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/addons-459174/id_rsa Username:docker}
	I1212 19:58:20.381936   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:21.837691   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.501366913s)
	I1212 19:58:21.837745   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.837745   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.389947119s)
	I1212 19:58:21.837757   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.837786   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.837802   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.837849   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.376753669s)
	I1212 19:58:21.837883   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.837901   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.837922   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.347186582s)
	I1212 19:58:21.837880   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.353756183s)
	I1212 19:58:21.837967   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.837975   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.837983   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.334413495s)
	I1212 19:58:21.837998   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838007   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.837950   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838019   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.838075   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.283774331s)
	I1212 19:58:21.838091   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838099   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.838194   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.02747338s)
	I1212 19:58:21.838210   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838218   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.838304   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.862026854s)
	I1212 19:58:21.838317   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838328   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.838394   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.62630925s)
	I1212 19:58:21.838408   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838416   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.838546   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.501645511s)
	W1212 19:58:21.838570   16875 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:58:21.838588   16875 retry.go:31] will retry after 272.315044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
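The failure above is the usual CRD-establishment race: the volume snapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass are bundled into a single kubectl apply, and the class is submitted before the freshly created CRDs are established and discoverable, so the REST mapping lookup fails and minikube schedules a retry (which completes a few seconds later using apply --force, as seen further down). Outside of minikube the same race can be sidestepped by waiting for the CRD to report the Established condition before applying the custom resource; a sketch, reusing the manifest paths from this log:

    # create the CRD first, wait until the API server has established it,
    # then apply the VolumeSnapshotClass that depends on it
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml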
	I1212 19:58:21.838668   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.255325441s)
	I1212 19:58:21.838684   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.838694   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841425   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841431   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841440   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841452   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841460   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841464   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841479   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841506   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841514   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841523   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841532   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841529   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841540   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841555   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841564   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841574   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841578   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841583   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841598   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841607   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841615   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841623   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841637   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841659   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841666   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841669   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841680   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841686   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841688   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841695   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841697   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841705   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841713   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841743   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841752   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841754   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841760   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841769   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841833   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841865   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841874   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841879   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841885   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.841895   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.841949   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841969   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841971   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.841979   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841981   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.841990   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.842002   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.842032   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.841515   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.842118   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.842126   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.842183   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842208   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842216   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.842275   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842304   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842311   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.842538   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842565   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842578   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.842613   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842622   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.842631   16875 addons.go:467] Verifying addon ingress=true in "addons-459174"
	I1212 19:58:21.844606   16875 out.go:177] * Verifying ingress addon...
	I1212 19:58:21.842718   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842739   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842759   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842780   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842799   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842818   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842836   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.842881   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.842895   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.845381   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.845396   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.845985   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.845998   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.846000   16875 addons.go:467] Verifying addon metrics-server=true in "addons-459174"
	I1212 19:58:21.846066   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.846160   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.846177   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.846187   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.846220   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.845987   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.846512   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.846535   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:21.846544   16875 addons.go:467] Verifying addon registry=true in "addons-459174"
	I1212 19:58:21.848909   16875 out.go:177] * Verifying registry addon...
	I1212 19:58:21.846513   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:21.846797   16875 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 19:58:21.851816   16875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 19:58:21.887172   16875 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 19:58:21.887193   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:21.887442   16875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 19:58:21.887465   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:21.924160   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.924187   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.924505   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.924527   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	W1212 19:58:21.924617   16875 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
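The warning above is an optimistic-concurrency conflict rather than a hard failure: the default-storageclass addon tries to clear the default-class annotation on the local-path StorageClass while the storage-provisioner-rancher addon is still updating the same object, so the write is rejected as stale and surfaced as a warning. If the default ever ends up wrong after such a race, the annotation can be set by hand; a sketch, assuming the minikube-provisioned class is named standard and local-path should not be the default:

    # demote local-path and mark the standard class as the cluster default
    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'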
	I1212 19:58:21.947598   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:21.957041   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:21.965893   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:21.965919   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:21.966295   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:21.966317   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:22.111307   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:58:22.510868   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:22.522975   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:22.684864   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:22.976133   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:22.980921   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:23.015899   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.19884622s)
	I1212 19:58:23.015941   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:23.015950   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:23.015971   16875 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.737669477s)
	I1212 19:58:23.018302   16875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 19:58:23.016294   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:23.016320   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:23.019991   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:23.020014   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:23.020031   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:23.021906   16875 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 19:58:23.020319   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:23.020330   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:23.023404   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:23.023425   16875 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-459174"
	I1212 19:58:23.023438   16875 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 19:58:23.023458   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 19:58:23.025255   16875 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 19:58:23.029788   16875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 19:58:23.046388   16875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 19:58:23.046417   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:23.058870   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
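The repeated kapi.go:96 lines that follow are minikube's readiness poll: it lists the pods behind each addon's label selector and logs the current phase until every pod reports Ready. A one-shot equivalent from a shell, using the csi-hostpath-driver selector from this log as the example, would look something like:

    # block until every csi-hostpath-driver pod is Ready (or the timeout expires)
    kubectl --context addons-459174 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=6m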
	I1212 19:58:23.252877   16875 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 19:58:23.252903   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 19:58:23.358717   16875 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:58:23.358738   16875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 19:58:23.463684   16875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:58:23.482453   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:23.483559   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:23.575672   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:23.957151   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:23.969042   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:24.070904   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:24.490940   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.379579181s)
	I1212 19:58:24.490984   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:24.491000   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:24.491370   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:24.491388   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:24.491399   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:24.491407   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:24.491657   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:24.491684   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:24.491704   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:24.495080   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:24.495123   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:24.567764   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:24.986971   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:24.987349   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:25.162174   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:25.182234   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:25.251935   16875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.788206089s)
	I1212 19:58:25.251986   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:25.251999   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:25.252294   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:25.252317   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:25.252329   16875 main.go:141] libmachine: Making call to close driver server
	I1212 19:58:25.252353   16875 main.go:141] libmachine: (addons-459174) Calling .Close
	I1212 19:58:25.252580   16875 main.go:141] libmachine: Successfully made call to close driver server
	I1212 19:58:25.252624   16875 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 19:58:25.252600   16875 main.go:141] libmachine: (addons-459174) DBG | Closing plugin on server side
	I1212 19:58:25.254401   16875 addons.go:467] Verifying addon gcp-auth=true in "addons-459174"
	I1212 19:58:25.256575   16875 out.go:177] * Verifying gcp-auth addon...
	I1212 19:58:25.258657   16875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 19:58:25.283579   16875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 19:58:25.283601   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:25.309579   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:25.455704   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:25.463521   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:25.567679   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:25.814214   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:25.952570   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:25.965345   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:26.065165   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:26.319194   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:26.477187   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:26.479757   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:26.572111   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:26.815209   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:26.953450   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:26.963385   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:27.065383   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:27.313650   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:27.453988   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:27.461890   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:27.576355   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:27.634044   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:27.814141   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:27.970567   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:27.971164   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:28.080867   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:28.321722   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:28.453169   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:28.462049   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:28.595935   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:28.814671   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:28.953078   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:28.963208   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:29.066432   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:29.317097   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:29.452431   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:29.472399   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:29.566425   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:29.671207   16875 pod_ready.go:102] pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace has status "Ready":"False"
	I1212 19:58:29.818535   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:29.975456   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:29.982123   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:30.075400   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:30.313963   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:30.455998   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:30.465212   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:30.568219   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:30.631632   16875 pod_ready.go:97] error getting pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-swgbh" not found
	I1212 19:58:30.631658   16875 pod_ready.go:81] duration metric: took 18.042946101s waiting for pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace to be "Ready" ...
	E1212 19:58:30.631668   16875 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-swgbh" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-swgbh" not found
	I1212 19:58:30.631673   16875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.641088   16875 pod_ready.go:92] pod "etcd-addons-459174" in "kube-system" namespace has status "Ready":"True"
	I1212 19:58:30.641108   16875 pod_ready.go:81] duration metric: took 9.429232ms waiting for pod "etcd-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.641129   16875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.646486   16875 pod_ready.go:92] pod "kube-apiserver-addons-459174" in "kube-system" namespace has status "Ready":"True"
	I1212 19:58:30.646502   16875 pod_ready.go:81] duration metric: took 5.367365ms waiting for pod "kube-apiserver-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.646510   16875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.660210   16875 pod_ready.go:92] pod "kube-controller-manager-addons-459174" in "kube-system" namespace has status "Ready":"True"
	I1212 19:58:30.660245   16875 pod_ready.go:81] duration metric: took 13.728916ms waiting for pod "kube-controller-manager-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.660254   16875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-knjxm" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:30.814093   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:30.957535   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:30.964715   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:31.080627   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:31.315920   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:31.456498   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:31.475159   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:31.572078   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:31.818044   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:31.842146   16875 pod_ready.go:92] pod "kube-proxy-knjxm" in "kube-system" namespace has status "Ready":"True"
	I1212 19:58:31.842180   16875 pod_ready.go:81] duration metric: took 1.181916583s waiting for pod "kube-proxy-knjxm" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:31.842193   16875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:31.963433   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:31.975321   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:32.031652   16875 pod_ready.go:92] pod "kube-scheduler-addons-459174" in "kube-system" namespace has status "Ready":"True"
	I1212 19:58:32.031673   16875 pod_ready.go:81] duration metric: took 189.472643ms waiting for pod "kube-scheduler-addons-459174" in "kube-system" namespace to be "Ready" ...
	I1212 19:58:32.031683   16875 pod_ready.go:38] duration metric: took 19.73190378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 19:58:32.031701   16875 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:58:32.031750   16875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:32.065560   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:32.112727   16875 api_server.go:72] duration metric: took 19.96142514s to wait for apiserver process to appear ...
	I1212 19:58:32.112751   16875 api_server.go:88] waiting for apiserver healthz status ...
	I1212 19:58:32.112775   16875 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I1212 19:58:32.124053   16875 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I1212 19:58:32.125975   16875 api_server.go:141] control plane version: v1.28.4
	I1212 19:58:32.126002   16875 api_server.go:131] duration metric: took 13.243289ms to wait for apiserver health ...
	I1212 19:58:32.126012   16875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 19:58:32.263477   16875 system_pods.go:59] 18 kube-system pods found
	I1212 19:58:32.263505   16875 system_pods.go:61] "coredns-5dd5756b68-xrvs4" [d56dc530-98d7-40ee-8cff-c514caeb8beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:58:32.263512   16875 system_pods.go:61] "csi-hostpath-attacher-0" [7302a26b-2cf1-46c9-86f6-bc5a48b7f3c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:58:32.263521   16875 system_pods.go:61] "csi-hostpath-resizer-0" [96c09baf-df08-47c4-8171-db3903d25f30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:58:32.263528   16875 system_pods.go:61] "csi-hostpathplugin-8tsrv" [0bd0b35a-6889-48c9-82ab-c990ff145810] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:58:32.263533   16875 system_pods.go:61] "etcd-addons-459174" [ae782f24-072a-46ff-b698-ffd0967f82a0] Running
	I1212 19:58:32.263540   16875 system_pods.go:61] "kube-apiserver-addons-459174" [0b66fb10-bc72-4062-bb64-44050c4a33ff] Running
	I1212 19:58:32.263544   16875 system_pods.go:61] "kube-controller-manager-addons-459174" [b1be52d8-3a25-4b12-b9ec-42a887614e4b] Running
	I1212 19:58:32.263552   16875 system_pods.go:61] "kube-ingress-dns-minikube" [d27e0916-21ec-47ef-865c-9cdf082b4dec] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:58:32.263562   16875 system_pods.go:61] "kube-proxy-knjxm" [88ec6c7b-171a-4cd6-b6c6-4a2c18baaf3b] Running
	I1212 19:58:32.263569   16875 system_pods.go:61] "kube-scheduler-addons-459174" [64693ab0-0e10-4e94-bf78-12cc8e87e29a] Running
	I1212 19:58:32.263578   16875 system_pods.go:61] "metrics-server-7c66d45ddc-8kvhh" [07e76411-9144-446a-9e56-c452110150e9] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:58:32.263592   16875 system_pods.go:61] "nvidia-device-plugin-daemonset-d5dnz" [934d08ef-405c-4c17-b5cd-ad3ab38cab88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:58:32.263610   16875 system_pods.go:61] "registry-proxy-xfflw" [318f5bf5-ed29-48d0-83db-7941bc942aee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:58:32.263618   16875 system_pods.go:61] "registry-qhjd2" [354858fb-09b5-436c-abc2-09d0c29c3561] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:58:32.263625   16875 system_pods.go:61] "snapshot-controller-58dbcc7b99-fxzgh" [03205717-3dcb-4277-aea4-01f8a7d1398a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:58:32.263633   16875 system_pods.go:61] "snapshot-controller-58dbcc7b99-gbmr2" [0c83d0e9-c10f-4256-bb66-21a092159350] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:58:32.263643   16875 system_pods.go:61] "storage-provisioner" [a1d0e927-735b-402f-abc3-2ca928e96a63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:58:32.263649   16875 system_pods.go:61] "tiller-deploy-7b677967b9-gfg56" [4712f730-1a01-40a5-9285-e1d920fd46c2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1212 19:58:32.263656   16875 system_pods.go:74] duration metric: took 137.637466ms to wait for pod list to return data ...
	I1212 19:58:32.263671   16875 default_sa.go:34] waiting for default service account to be created ...
	I1212 19:58:32.314157   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:32.432116   16875 default_sa.go:45] found service account: "default"
	I1212 19:58:32.432147   16875 default_sa.go:55] duration metric: took 168.465903ms for default service account to be created ...
	I1212 19:58:32.432161   16875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 19:58:32.452736   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:32.466965   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:32.567209   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:32.676417   16875 system_pods.go:86] 18 kube-system pods found
	I1212 19:58:32.676466   16875 system_pods.go:89] "coredns-5dd5756b68-xrvs4" [d56dc530-98d7-40ee-8cff-c514caeb8beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:58:32.676478   16875 system_pods.go:89] "csi-hostpath-attacher-0" [7302a26b-2cf1-46c9-86f6-bc5a48b7f3c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:58:32.676489   16875 system_pods.go:89] "csi-hostpath-resizer-0" [96c09baf-df08-47c4-8171-db3903d25f30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:58:32.676500   16875 system_pods.go:89] "csi-hostpathplugin-8tsrv" [0bd0b35a-6889-48c9-82ab-c990ff145810] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:58:32.676513   16875 system_pods.go:89] "etcd-addons-459174" [ae782f24-072a-46ff-b698-ffd0967f82a0] Running
	I1212 19:58:32.676521   16875 system_pods.go:89] "kube-apiserver-addons-459174" [0b66fb10-bc72-4062-bb64-44050c4a33ff] Running
	I1212 19:58:32.676529   16875 system_pods.go:89] "kube-controller-manager-addons-459174" [b1be52d8-3a25-4b12-b9ec-42a887614e4b] Running
	I1212 19:58:32.676565   16875 system_pods.go:89] "kube-ingress-dns-minikube" [d27e0916-21ec-47ef-865c-9cdf082b4dec] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:58:32.676577   16875 system_pods.go:89] "kube-proxy-knjxm" [88ec6c7b-171a-4cd6-b6c6-4a2c18baaf3b] Running
	I1212 19:58:32.676586   16875 system_pods.go:89] "kube-scheduler-addons-459174" [64693ab0-0e10-4e94-bf78-12cc8e87e29a] Running
	I1212 19:58:32.676597   16875 system_pods.go:89] "metrics-server-7c66d45ddc-8kvhh" [07e76411-9144-446a-9e56-c452110150e9] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:58:32.676617   16875 system_pods.go:89] "nvidia-device-plugin-daemonset-d5dnz" [934d08ef-405c-4c17-b5cd-ad3ab38cab88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:58:32.676633   16875 system_pods.go:89] "registry-proxy-xfflw" [318f5bf5-ed29-48d0-83db-7941bc942aee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:58:32.676646   16875 system_pods.go:89] "registry-qhjd2" [354858fb-09b5-436c-abc2-09d0c29c3561] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:58:32.676661   16875 system_pods.go:89] "snapshot-controller-58dbcc7b99-fxzgh" [03205717-3dcb-4277-aea4-01f8a7d1398a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:58:32.676676   16875 system_pods.go:89] "snapshot-controller-58dbcc7b99-gbmr2" [0c83d0e9-c10f-4256-bb66-21a092159350] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:58:32.676691   16875 system_pods.go:89] "storage-provisioner" [a1d0e927-735b-402f-abc3-2ca928e96a63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:58:32.676701   16875 system_pods.go:89] "tiller-deploy-7b677967b9-gfg56" [4712f730-1a01-40a5-9285-e1d920fd46c2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1212 19:58:32.676718   16875 system_pods.go:126] duration metric: took 244.548632ms to wait for k8s-apps to be running ...
	I1212 19:58:32.676731   16875 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 19:58:32.676788   16875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:58:32.738708   16875 system_svc.go:56] duration metric: took 61.967287ms WaitForService to wait for kubelet.
	I1212 19:58:32.738737   16875 kubeadm.go:581] duration metric: took 20.587441442s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 19:58:32.738758   16875 node_conditions.go:102] verifying NodePressure condition ...
	I1212 19:58:33.208978   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:33.209208   16875 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 19:58:33.209262   16875 node_conditions.go:123] node cpu capacity is 2
	I1212 19:58:33.209279   16875 node_conditions.go:105] duration metric: took 470.515765ms to run NodePressure ...
	I1212 19:58:33.209292   16875 start.go:228] waiting for startup goroutines ...
	I1212 19:58:33.209968   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:33.213847   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:33.214088   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:33.314208   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:33.452906   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:33.471251   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:33.565938   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:33.813913   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:33.952406   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:33.964657   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:34.077661   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:34.315210   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:34.453612   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:34.463519   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:34.573860   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:34.817243   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:34.963482   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:34.967036   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:35.066311   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:35.316812   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:35.453134   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:35.463295   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:35.567945   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:35.815131   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:35.956108   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:35.971303   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:36.074838   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:36.318510   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:36.454095   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:36.468515   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:36.566898   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:36.824945   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:36.991777   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:37.003867   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:37.065599   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:37.314241   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:37.458178   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:37.464729   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:37.569021   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:37.815328   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:37.959872   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:37.964466   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:38.077434   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:38.329536   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:38.455535   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:38.464842   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:38.564851   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:38.833203   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:38.953292   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:38.981661   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:39.094143   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:39.313966   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:39.457768   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:39.461880   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:39.573374   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:39.813423   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:39.952740   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:39.962452   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:40.065945   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:40.314181   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:40.453314   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:40.465772   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:40.567368   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:40.814271   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:40.953672   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:40.962013   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:41.065722   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:41.313973   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:41.456444   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:41.464723   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:41.566159   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:41.815262   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:41.953225   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:41.963088   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:42.065590   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:42.314220   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:42.666826   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:42.668605   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:42.675441   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:42.815080   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:42.952837   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:42.962742   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:43.066545   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:43.313803   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:43.455640   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:43.463364   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:43.564439   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:43.814452   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:43.954590   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:43.964469   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:44.065936   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:44.314397   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:44.453799   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:44.463026   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:44.565482   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:44.817468   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:44.952668   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:44.961252   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:45.068415   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:45.315423   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:45.452991   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:45.470844   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:45.582997   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:45.816971   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:45.952639   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:45.962436   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:46.076105   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:46.314529   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:46.452905   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:46.462105   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:46.566233   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:46.814447   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:46.955806   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:46.961811   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:47.068388   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:47.314726   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:47.453071   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:47.462164   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:47.565400   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:47.813555   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:47.953305   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:47.962626   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:48.066014   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:48.314547   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:48.453440   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:48.467148   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:48.570038   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:48.814459   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:48.953395   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:48.962141   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:49.065378   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:49.313596   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:49.452507   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:49.461802   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:49.565325   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:49.814278   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:49.952320   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:49.962881   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:50.066733   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:50.314455   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:50.456295   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:50.464690   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:50.566332   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:50.817368   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:50.953422   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:50.963962   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:51.068221   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:51.313546   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:51.452468   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:51.462911   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:51.565935   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:51.814802   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:51.954227   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:51.962924   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:52.281683   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:52.323154   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:52.453273   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:52.465020   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:52.568664   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:52.814706   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:52.953345   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:52.963351   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:53.068440   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:53.313717   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:53.453664   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:53.464299   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:53.565283   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:53.813942   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:53.952559   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:53.962108   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:54.066087   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:54.314217   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:54.462594   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:54.466266   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:54.564778   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:54.813777   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:54.952085   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:54.961977   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:55.065627   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:55.313709   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:55.453476   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:55.466885   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:55.565692   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:55.814867   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:55.952665   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:55.967440   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:56.064480   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:56.315164   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:56.454780   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:56.461661   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:56.566664   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:56.813431   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:56.952975   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:56.961565   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:57.069988   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:57.315100   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:57.452616   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:57.473135   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:57.594510   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:57.818459   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:57.958253   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:57.964907   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:58.065110   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:58.316948   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:58.452436   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:58.462733   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:58.565358   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:58.815592   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:58.954797   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:58.961731   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:59.065444   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:59.651399   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:59.651430   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:59.652015   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:58:59.658329   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:58:59.825670   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:58:59.958345   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:58:59.962171   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:00.064767   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:00.321276   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:00.452323   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:00.470880   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:00.565970   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:00.814418   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:00.959860   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:00.966851   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:01.066905   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:01.313239   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:01.453577   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:01.462070   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:01.565817   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:01.813787   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:01.953451   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:01.962511   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:02.065074   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:02.319610   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:02.454233   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:02.463967   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:02.566311   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:02.816672   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:02.952642   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:02.966774   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:03.065954   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:03.314359   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:03.453061   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:03.462801   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:03.566457   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:03.814678   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:03.959216   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:03.966859   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:04.066304   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:04.313951   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:04.462893   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:04.463096   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:04.566166   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:04.814278   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:04.953048   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:04.966281   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:05.065601   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:05.313337   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:05.452233   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:05.465980   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:05.566212   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:05.815936   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:05.952664   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:05.963399   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:06.064920   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:06.314060   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:06.461357   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:06.475437   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:06.565337   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:06.814535   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:06.957047   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:06.961766   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:59:07.066813   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:07.324329   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:07.452959   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:07.461715   16875 kapi.go:107] duration metric: took 45.609899068s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 19:59:07.565556   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:07.820168   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:07.951913   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:08.065940   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:08.314341   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:08.452557   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:08.568784   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:08.818178   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:08.952853   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:09.065323   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:09.313578   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:09.458331   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:09.567161   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:09.814490   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:09.956653   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:10.066687   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:10.314381   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:10.452693   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:10.564210   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:10.814281   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:10.953774   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:11.065471   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:11.313969   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:11.455086   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:11.564859   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:12.229905   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:12.230332   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:12.230664   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:12.314078   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:12.452968   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:12.565001   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:12.814174   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:12.957114   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:13.065414   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:13.313810   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:13.453887   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:13.564861   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:13.814090   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:13.952609   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:14.068181   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:14.654154   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:14.654446   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:14.656876   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:14.814690   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:14.952652   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:15.065168   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:15.313518   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:15.452802   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:15.565328   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:15.813557   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:15.979056   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:16.075805   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:16.314992   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:16.453009   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:16.565433   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:16.814661   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:16.953097   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:17.064709   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:17.319463   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:17.460423   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:17.570291   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:17.814077   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:17.952639   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:18.075178   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:18.318873   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:18.456910   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:18.566912   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:18.815031   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:18.954863   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:19.079769   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:19.333262   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:19.495014   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:19.578161   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:19.818405   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:19.960276   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:20.066353   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:20.313519   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:20.453577   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:20.573185   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:20.815632   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:20.958872   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:21.068563   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:21.315981   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:21.452884   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:21.569217   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:21.812976   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:21.952660   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:22.066207   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:22.313704   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:22.453545   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:22.566900   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:22.813531   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:22.957282   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:23.065367   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:23.313888   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:23.452861   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:23.573352   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:23.813893   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:23.953290   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:24.068664   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:24.314867   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:24.452402   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:24.565646   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:24.814832   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:24.954416   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:25.065949   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:25.314119   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:25.454448   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:25.566351   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:25.814385   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:25.953364   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:26.068990   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:26.314933   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:26.454086   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:26.565750   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:26.813537   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:26.953186   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:27.066675   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:27.314005   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:27.453157   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:27.565365   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:27.813875   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:27.952287   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:28.067947   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:59:28.313855   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:28.465729   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:28.576750   16875 kapi.go:107] duration metric: took 1m5.546962807s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 19:59:28.813488   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:28.952483   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:29.313806   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:29.453760   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:29.814789   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:29.952598   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:30.317896   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:30.452690   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:30.819946   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:30.953822   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:31.314839   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:31.453060   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:31.814807   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:31.952787   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:32.314616   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:32.453458   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:32.813799   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:32.953250   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:33.314513   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:33.453061   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:33.813889   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:33.954520   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:34.318474   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:34.452918   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:34.816027   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:34.953553   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:35.315653   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:35.454138   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:35.816352   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:35.952700   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:36.315307   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:36.456041   16875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:59:36.815373   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:36.953102   16875 kapi.go:107] duration metric: took 1m15.106304302s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 19:59:37.313934   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:37.813825   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:38.315180   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:38.818364   16875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:59:39.314816   16875 kapi.go:107] duration metric: took 1m14.056156746s to wait for kubernetes.io/minikube-addons=gcp-auth ...
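The kapi.go:96 entries above show minikube polling the addon pods by label selector until each reports Ready, and the kapi.go:107 entries record how long each selector took. A rough manual equivalent for the last selector, sketched as a kubectl command (the gcp-auth namespace is taken from the pod metadata later in this log; the 5m timeout is an illustrative assumption), would be:

  # Wait for the gcp-auth addon pod using the same label minikube polls above.
  kubectl --context addons-459174 -n gcp-auth wait pod \
    -l kubernetes.io/minikube-addons=gcp-auth \
    --for=condition=Ready --timeout=5m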
	I1212 19:59:39.316407   16875 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-459174 cluster.
	I1212 19:59:39.317745   16875 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 19:59:39.319152   16875 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 19:59:39.320763   16875 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1212 19:59:39.321999   16875 addons.go:502] enable addons completed in 1m27.381224278s: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin metrics-server helm-tiller inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1212 19:59:39.322039   16875 start.go:233] waiting for cluster config update ...
	I1212 19:59:39.322055   16875 start.go:242] writing updated cluster config ...
	I1212 19:59:39.322353   16875 ssh_runner.go:195] Run: rm -f paused
	I1212 19:59:39.372729   16875 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 19:59:39.374936   16875 out.go:177] * Done! kubectl is now configured to use "addons-459174" cluster and "default" namespace by default
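The gcp-auth messages above describe two follow-up actions: opting a pod out of credential mounting via the gcp-auth-skip-secret label, and refreshing existing pods with --refresh. A minimal sketch of each (the pod name is illustrative, the label value "true" is an assumption since the message only names the key, and the image digest is copied from the container list later in this log; since existing pods only pick up changes when recreated, the label has to be present at pod creation time):

  # Create a pod that opts out of credential mounting via the gcp-auth-skip-secret label.
  kubectl --context addons-459174 run hello-skip-auth \
    --image=gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7 \
    --labels="gcp-auth-skip-secret=true"

  # Re-mount credentials into already-running pods by re-running the addon with --refresh,
  # as the message above suggests.
  out/minikube-linux-amd64 -p addons-459174 addons enable gcp-auth --refresh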
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 19:57:24 UTC, ends at Tue 2023-12-12 20:02:39 UTC. --
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.124215967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c164fab5-5503-4f69-92df-ea4325791b14 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.124548708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87e0fe919c941076fdca6a0341af3b3984d53b0559974717327fe028a678e0b,PodSandboxId:2a51f169301d6562481c668024b964262cd343f625c632aebcd581bbbe88b8e0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702411350842342604,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wmx78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ead679f2-6a6b-4aca-8d3a-815f630208c1,},Annotations:map[string]string{io.kubernetes.container.hash: d0403b36,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18513fa0d875fbd8f39b9bec3cb80c437302b17478c0fa65b0f735b60449fd9c,PodSandboxId:e7dafb2c653d437e0d92c666a413a340f3fe1fb7e84913aee3ea91df6426ac24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411208580261268,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d66934a8-9889-4d2f-86bc-fef56154d835,},Annotations:map[string]string{io.kubernet
es.container.hash: d88c266d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238e7b9403b0b887cb064f6a8d4458dabf4acc24dbff2723df664431844efda1,PodSandboxId:52db834021bc55900168b2f734eba1d62cba55086e86f089ca6b397a8209ac7b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702411204554077938,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-plznk,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: f88981d5-a11b-40da-8fa9-7f09e276a293,},Annotations:map[string]string{io.kubernetes.container.hash: 11a9b05,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172d2de7fbe33af6a3ca7b572ea44eaa5779dce1f0aabd47709893121d2818f0,PodSandboxId:06661bd6f3eda6e8946712a33ee08903e3ffb88cc4b8bb296f8aa0707d121140,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702411178317949878,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvkcs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 17977fc0-838c-412a-86aa-829a232b30a9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6bfb97,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af324551e3acf12293b029ebd9162b1a976fedbb521fbd9c9523b6e6f0ca70d0,PodSandboxId:f7c3bdc6b5d36a04d2498953c54b8f2eedcd92d9a1e7a9e9ad5550694356564b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411158532208085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bcmc8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0a8ad48-5fbb-4b6c-9789-39c74c69bbad,},Annotations:map[string]string{io.kubernetes.container.hash: aa66ccd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16613d882c3c2d030d40a12d732b6fe8fe933931a49e5758b786754aa7b342da,PodSandboxId:ef6dfde4bc5a1bd30c6aa9af4aefdd99fc5d947c271ad107264ccf63931a826f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411156079780520,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x6662,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da87049e-0a77-40eb-add8-f242d8ac455f,},Annotations:map[string]string{io.kubernetes.container.hash: 25b43740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2206c3a64be188b62ba8b7190f9c95834565d01a239642d710f071a4a9fe7add,PodSandboxId:2c95f7e45061dada00ea77d7c3ae2ea381486411bd0f34e94c1da344946d9558,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411114264368837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d0e927-735b-402f-abc3-2ca928e96a63,},Annotations:map[string]string{io.kubernetes.container.hash: 505844d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d75d498038df17f64c00a3c91467cfc0aa1593d304a9a5ad1ca0be8b534e718d,PodSandboxId:a434e9284c215f2a531cefe898c1e31eb8addc2490229a006060ce13f335e401,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5be
d1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702411107748158508,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knjxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ec6c7b-171a-4cd6-b6c6-4a2c18baaf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 84f360c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:322e1f2f4c0f5792e19614303977e8e5cf47ba19d2c819a5ca76ea0d725989c8,PodSandboxId:52acb1149fd61de4f2af07f009c34a5d0371901d2d0f3c882c0a1166b03b9605,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702411096021072815,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xrvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56dc530-98d7-40ee-8cff-c514caeb8beb,},Annotations:map[string]string{io.kubernetes.container.hash: 78d3b817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9c4e0f38cf65f25c76fb6dbbca23d1daf1a3458df01cc190b8e78140dcea77,PodSandboxId:8b8a3b24fd7ba70d9c7cbc630d6dc317324a217d216f5cc8e838e4aaf2172e42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702411071428564505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d63904626b4f68b7186cbc0400df187,},Annotations:map[string]string{io.kubernetes.container.hash: ed0e4ca1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10044c358c5b0dd98713fafb48caaff32ed2aa8cf2f5c0bb005ce9c121deeb1b,PodSandboxId:09febea68f0741f1c383c7f8d3d7d1bde04d500f185d95d9110125e2dc53c714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e369703700
05a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702411071122868711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a330c5ce617fabecdaf0528493189,},Annotations:map[string]string{io.kubernetes.container.hash: 866f111e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639f2c74947d526bb2619d394b82afe99db79598e7f6c4569f3b5f46a1d134f2,PodSandboxId:75fc6820d0df1d0af2bb8fc55bbcb7412e8da15889f8c9f92b2a784dba052873,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a1
09c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702411071054812310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf760a953a9dcb7ed6b54b5f9631e77,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46dd22f38dc7fe87aac4b9cc4f04a0ed38d2030fb89133ddc8e08b459e081c3,PodSandboxId:e8de98c5cc9db7a1919221f26a2184c7dfcbbff203dbaaa6cdde3d4c6711360c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236
591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702411070838326796,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcd17fd662977ac40b29164d4bb3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c164fab5-5503-4f69-92df-ea4325791b14 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.162820586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9f5d130b-bffb-41d0-889a-733698059262 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.162880880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9f5d130b-bffb-41d0-889a-733698059262 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.164232835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8a025704-9100-4044-b31b-b923254f667a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.165556568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702411359165536799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=8a025704-9100-4044-b31b-b923254f667a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.166355063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62fd0f66-3a38-4223-b173-720aabc37282 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.166427902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62fd0f66-3a38-4223-b173-720aabc37282 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.166900368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87e0fe919c941076fdca6a0341af3b3984d53b0559974717327fe028a678e0b,PodSandboxId:2a51f169301d6562481c668024b964262cd343f625c632aebcd581bbbe88b8e0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702411350842342604,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wmx78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ead679f2-6a6b-4aca-8d3a-815f630208c1,},Annotations:map[string]string{io.kubernetes.container.hash: d0403b36,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18513fa0d875fbd8f39b9bec3cb80c437302b17478c0fa65b0f735b60449fd9c,PodSandboxId:e7dafb2c653d437e0d92c666a413a340f3fe1fb7e84913aee3ea91df6426ac24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411208580261268,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d66934a8-9889-4d2f-86bc-fef56154d835,},Annotations:map[string]string{io.kubernet
es.container.hash: d88c266d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238e7b9403b0b887cb064f6a8d4458dabf4acc24dbff2723df664431844efda1,PodSandboxId:52db834021bc55900168b2f734eba1d62cba55086e86f089ca6b397a8209ac7b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702411204554077938,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-plznk,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: f88981d5-a11b-40da-8fa9-7f09e276a293,},Annotations:map[string]string{io.kubernetes.container.hash: 11a9b05,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172d2de7fbe33af6a3ca7b572ea44eaa5779dce1f0aabd47709893121d2818f0,PodSandboxId:06661bd6f3eda6e8946712a33ee08903e3ffb88cc4b8bb296f8aa0707d121140,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702411178317949878,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvkcs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 17977fc0-838c-412a-86aa-829a232b30a9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6bfb97,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af324551e3acf12293b029ebd9162b1a976fedbb521fbd9c9523b6e6f0ca70d0,PodSandboxId:f7c3bdc6b5d36a04d2498953c54b8f2eedcd92d9a1e7a9e9ad5550694356564b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411158532208085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bcmc8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0a8ad48-5fbb-4b6c-9789-39c74c69bbad,},Annotations:map[string]string{io.kubernetes.container.hash: aa66ccd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16613d882c3c2d030d40a12d732b6fe8fe933931a49e5758b786754aa7b342da,PodSandboxId:ef6dfde4bc5a1bd30c6aa9af4aefdd99fc5d947c271ad107264ccf63931a826f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411156079780520,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x6662,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da87049e-0a77-40eb-add8-f242d8ac455f,},Annotations:map[string]string{io.kubernetes.container.hash: 25b43740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2206c3a64be188b62ba8b7190f9c95834565d01a239642d710f071a4a9fe7add,PodSandboxId:2c95f7e45061dada00ea77d7c3ae2ea381486411bd0f34e94c1da344946d9558,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411114264368837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d0e927-735b-402f-abc3-2ca928e96a63,},Annotations:map[string]string{io.kubernetes.container.hash: 505844d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d75d498038df17f64c00a3c91467cfc0aa1593d304a9a5ad1ca0be8b534e718d,PodSandboxId:a434e9284c215f2a531cefe898c1e31eb8addc2490229a006060ce13f335e401,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5be
d1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702411107748158508,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knjxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ec6c7b-171a-4cd6-b6c6-4a2c18baaf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 84f360c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:322e1f2f4c0f5792e19614303977e8e5cf47ba19d2c819a5ca76ea0d725989c8,PodSandboxId:52acb1149fd61de4f2af07f009c34a5d0371901d2d0f3c882c0a1166b03b9605,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702411096021072815,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xrvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56dc530-98d7-40ee-8cff-c514caeb8beb,},Annotations:map[string]string{io.kubernetes.container.hash: 78d3b817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9c4e0f38cf65f25c76fb6dbbca23d1daf1a3458df01cc190b8e78140dcea77,PodSandboxId:8b8a3b24fd7ba70d9c7cbc630d6dc317324a217d216f5cc8e838e4aaf2172e42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702411071428564505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d63904626b4f68b7186cbc0400df187,},Annotations:map[string]string{io.kubernetes.container.hash: ed0e4ca1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10044c358c5b0dd98713fafb48caaff32ed2aa8cf2f5c0bb005ce9c121deeb1b,PodSandboxId:09febea68f0741f1c383c7f8d3d7d1bde04d500f185d95d9110125e2dc53c714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e369703700
05a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702411071122868711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a330c5ce617fabecdaf0528493189,},Annotations:map[string]string{io.kubernetes.container.hash: 866f111e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639f2c74947d526bb2619d394b82afe99db79598e7f6c4569f3b5f46a1d134f2,PodSandboxId:75fc6820d0df1d0af2bb8fc55bbcb7412e8da15889f8c9f92b2a784dba052873,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a1
09c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702411071054812310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf760a953a9dcb7ed6b54b5f9631e77,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46dd22f38dc7fe87aac4b9cc4f04a0ed38d2030fb89133ddc8e08b459e081c3,PodSandboxId:e8de98c5cc9db7a1919221f26a2184c7dfcbbff203dbaaa6cdde3d4c6711360c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236
591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702411070838326796,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcd17fd662977ac40b29164d4bb3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62fd0f66-3a38-4223-b173-720aabc37282 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.201623171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ea2e6c9d-bc0d-4d79-a0a0-3e33aebbf1bf name=/runtime.v1.RuntimeService/Version
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.201762600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ea2e6c9d-bc0d-4d79-a0a0-3e33aebbf1bf name=/runtime.v1.RuntimeService/Version
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.202989663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0e5b9bcb-35a9-4820-8ccd-0506b3316bcb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.204206081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702411359204189635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=0e5b9bcb-35a9-4820-8ccd-0506b3316bcb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.204905583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=21290e5d-c60f-4a50-b654-e045813f33da name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.204979748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=21290e5d-c60f-4a50-b654-e045813f33da name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.205271332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87e0fe919c941076fdca6a0341af3b3984d53b0559974717327fe028a678e0b,PodSandboxId:2a51f169301d6562481c668024b964262cd343f625c632aebcd581bbbe88b8e0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702411350842342604,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wmx78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ead679f2-6a6b-4aca-8d3a-815f630208c1,},Annotations:map[string]string{io.kubernetes.container.hash: d0403b36,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18513fa0d875fbd8f39b9bec3cb80c437302b17478c0fa65b0f735b60449fd9c,PodSandboxId:e7dafb2c653d437e0d92c666a413a340f3fe1fb7e84913aee3ea91df6426ac24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411208580261268,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d66934a8-9889-4d2f-86bc-fef56154d835,},Annotations:map[string]string{io.kubernet
es.container.hash: d88c266d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238e7b9403b0b887cb064f6a8d4458dabf4acc24dbff2723df664431844efda1,PodSandboxId:52db834021bc55900168b2f734eba1d62cba55086e86f089ca6b397a8209ac7b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702411204554077938,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-plznk,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: f88981d5-a11b-40da-8fa9-7f09e276a293,},Annotations:map[string]string{io.kubernetes.container.hash: 11a9b05,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172d2de7fbe33af6a3ca7b572ea44eaa5779dce1f0aabd47709893121d2818f0,PodSandboxId:06661bd6f3eda6e8946712a33ee08903e3ffb88cc4b8bb296f8aa0707d121140,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702411178317949878,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvkcs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 17977fc0-838c-412a-86aa-829a232b30a9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6bfb97,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af324551e3acf12293b029ebd9162b1a976fedbb521fbd9c9523b6e6f0ca70d0,PodSandboxId:f7c3bdc6b5d36a04d2498953c54b8f2eedcd92d9a1e7a9e9ad5550694356564b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411158532208085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bcmc8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0a8ad48-5fbb-4b6c-9789-39c74c69bbad,},Annotations:map[string]string{io.kubernetes.container.hash: aa66ccd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16613d882c3c2d030d40a12d732b6fe8fe933931a49e5758b786754aa7b342da,PodSandboxId:ef6dfde4bc5a1bd30c6aa9af4aefdd99fc5d947c271ad107264ccf63931a826f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411156079780520,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x6662,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da87049e-0a77-40eb-add8-f242d8ac455f,},Annotations:map[string]string{io.kubernetes.container.hash: 25b43740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2206c3a64be188b62ba8b7190f9c95834565d01a239642d710f071a4a9fe7add,PodSandboxId:2c95f7e45061dada00ea77d7c3ae2ea381486411bd0f34e94c1da344946d9558,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411114264368837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d0e927-735b-402f-abc3-2ca928e96a63,},Annotations:map[string]string{io.kubernetes.container.hash: 505844d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d75d498038df17f64c00a3c91467cfc0aa1593d304a9a5ad1ca0be8b534e718d,PodSandboxId:a434e9284c215f2a531cefe898c1e31eb8addc2490229a006060ce13f335e401,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5be
d1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702411107748158508,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knjxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ec6c7b-171a-4cd6-b6c6-4a2c18baaf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 84f360c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:322e1f2f4c0f5792e19614303977e8e5cf47ba19d2c819a5ca76ea0d725989c8,PodSandboxId:52acb1149fd61de4f2af07f009c34a5d0371901d2d0f3c882c0a1166b03b9605,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702411096021072815,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xrvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56dc530-98d7-40ee-8cff-c514caeb8beb,},Annotations:map[string]string{io.kubernetes.container.hash: 78d3b817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9c4e0f38cf65f25c76fb6dbbca23d1daf1a3458df01cc190b8e78140dcea77,PodSandboxId:8b8a3b24fd7ba70d9c7cbc630d6dc317324a217d216f5cc8e838e4aaf2172e42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702411071428564505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d63904626b4f68b7186cbc0400df187,},Annotations:map[string]string{io.kubernetes.container.hash: ed0e4ca1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10044c358c5b0dd98713fafb48caaff32ed2aa8cf2f5c0bb005ce9c121deeb1b,PodSandboxId:09febea68f0741f1c383c7f8d3d7d1bde04d500f185d95d9110125e2dc53c714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e369703700
05a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702411071122868711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a330c5ce617fabecdaf0528493189,},Annotations:map[string]string{io.kubernetes.container.hash: 866f111e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639f2c74947d526bb2619d394b82afe99db79598e7f6c4569f3b5f46a1d134f2,PodSandboxId:75fc6820d0df1d0af2bb8fc55bbcb7412e8da15889f8c9f92b2a784dba052873,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a1
09c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702411071054812310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf760a953a9dcb7ed6b54b5f9631e77,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46dd22f38dc7fe87aac4b9cc4f04a0ed38d2030fb89133ddc8e08b459e081c3,PodSandboxId:e8de98c5cc9db7a1919221f26a2184c7dfcbbff203dbaaa6cdde3d4c6711360c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236
591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702411070838326796,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcd17fd662977ac40b29164d4bb3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=21290e5d-c60f-4a50-b654-e045813f33da name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.235798531Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=65f92c68-f3c2-4f46-a955-6810bba388ef name=/runtime.v1.RuntimeService/Status
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.235874847Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=65f92c68-f3c2-4f46-a955-6810bba388ef name=/runtime.v1.RuntimeService/Status
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.248464929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ba248846-3501-435b-bd20-b470b30501bd name=/runtime.v1.RuntimeService/Version
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.248548848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ba248846-3501-435b-bd20-b470b30501bd name=/runtime.v1.RuntimeService/Version
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.250496587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f25586e1-a40a-4cc7-98e3-5cc7b2ea5344 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.251935047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702411359251915104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=f25586e1-a40a-4cc7-98e3-5cc7b2ea5344 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.253115373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=782b681c-f7e9-43c6-afa9-a18426f851b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.253167886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=782b681c-f7e9-43c6-afa9-a18426f851b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:02:39 addons-459174 crio[717]: time="2023-12-12 20:02:39.254385512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87e0fe919c941076fdca6a0341af3b3984d53b0559974717327fe028a678e0b,PodSandboxId:2a51f169301d6562481c668024b964262cd343f625c632aebcd581bbbe88b8e0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702411350842342604,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wmx78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ead679f2-6a6b-4aca-8d3a-815f630208c1,},Annotations:map[string]string{io.kubernetes.container.hash: d0403b36,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18513fa0d875fbd8f39b9bec3cb80c437302b17478c0fa65b0f735b60449fd9c,PodSandboxId:e7dafb2c653d437e0d92c666a413a340f3fe1fb7e84913aee3ea91df6426ac24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411208580261268,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d66934a8-9889-4d2f-86bc-fef56154d835,},Annotations:map[string]string{io.kubernet
es.container.hash: d88c266d,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238e7b9403b0b887cb064f6a8d4458dabf4acc24dbff2723df664431844efda1,PodSandboxId:52db834021bc55900168b2f734eba1d62cba55086e86f089ca6b397a8209ac7b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702411204554077938,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-plznk,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: f88981d5-a11b-40da-8fa9-7f09e276a293,},Annotations:map[string]string{io.kubernetes.container.hash: 11a9b05,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172d2de7fbe33af6a3ca7b572ea44eaa5779dce1f0aabd47709893121d2818f0,PodSandboxId:06661bd6f3eda6e8946712a33ee08903e3ffb88cc4b8bb296f8aa0707d121140,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702411178317949878,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvkcs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 17977fc0-838c-412a-86aa-829a232b30a9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6bfb97,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af324551e3acf12293b029ebd9162b1a976fedbb521fbd9c9523b6e6f0ca70d0,PodSandboxId:f7c3bdc6b5d36a04d2498953c54b8f2eedcd92d9a1e7a9e9ad5550694356564b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411158532208085,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bcmc8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0a8ad48-5fbb-4b6c-9789-39c74c69bbad,},Annotations:map[string]string{io.kubernetes.container.hash: aa66ccd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16613d882c3c2d030d40a12d732b6fe8fe933931a49e5758b786754aa7b342da,PodSandboxId:ef6dfde4bc5a1bd30c6aa9af4aefdd99fc5d947c271ad107264ccf63931a826f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702411156079780520,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x6662,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da87049e-0a77-40eb-add8-f242d8ac455f,},Annotations:map[string]string{io.kubernetes.container.hash: 25b43740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2206c3a64be188b62ba8b7190f9c95834565d01a239642d710f071a4a9fe7add,PodSandboxId:2c95f7e45061dada00ea77d7c3ae2ea381486411bd0f34e94c1da344946d9558,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411114264368837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d0e927-735b-402f-abc3-2ca928e96a63,},Annotations:map[string]string{io.kubernetes.container.hash: 505844d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d75d498038df17f64c00a3c91467cfc0aa1593d304a9a5ad1ca0be8b534e718d,PodSandboxId:a434e9284c215f2a531cefe898c1e31eb8addc2490229a006060ce13f335e401,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5be
d1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702411107748158508,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knjxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ec6c7b-171a-4cd6-b6c6-4a2c18baaf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 84f360c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:322e1f2f4c0f5792e19614303977e8e5cf47ba19d2c819a5ca76ea0d725989c8,PodSandboxId:52acb1149fd61de4f2af07f009c34a5d0371901d2d0f3c882c0a1166b03b9605,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702411096021072815,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xrvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56dc530-98d7-40ee-8cff-c514caeb8beb,},Annotations:map[string]string{io.kubernetes.container.hash: 78d3b817,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9c4e0f38cf65f25c76fb6dbbca23d1daf1a3458df01cc190b8e78140dcea77,PodSandboxId:8b8a3b24fd7ba70d9c7cbc630d6dc317324a217d216f5cc8e838e4aaf2172e42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702411071428564505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d63904626b4f68b7186cbc0400df187,},Annotations:map[string]string{io.kubernetes.container.hash: ed0e4ca1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10044c358c5b0dd98713fafb48caaff32ed2aa8cf2f5c0bb005ce9c121deeb1b,PodSandboxId:09febea68f0741f1c383c7f8d3d7d1bde04d500f185d95d9110125e2dc53c714,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e369703700
05a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702411071122868711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0a330c5ce617fabecdaf0528493189,},Annotations:map[string]string{io.kubernetes.container.hash: 866f111e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639f2c74947d526bb2619d394b82afe99db79598e7f6c4569f3b5f46a1d134f2,PodSandboxId:75fc6820d0df1d0af2bb8fc55bbcb7412e8da15889f8c9f92b2a784dba052873,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a1
09c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702411071054812310,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf760a953a9dcb7ed6b54b5f9631e77,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46dd22f38dc7fe87aac4b9cc4f04a0ed38d2030fb89133ddc8e08b459e081c3,PodSandboxId:e8de98c5cc9db7a1919221f26a2184c7dfcbbff203dbaaa6cdde3d4c6711360c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236
591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702411070838326796,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-459174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcd17fd662977ac40b29164d4bb3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=782b681c-f7e9-43c6-afa9-a18426f851b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f87e0fe919c94       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   2a51f169301d6       hello-world-app-5d77478584-wmx78
	18513fa0d875f       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   e7dafb2c653d4       nginx
	238e7b9403b0b       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   52db834021bc5       headlamp-777fd4b855-plznk
	172d2de7fbe33       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   06661bd6f3eda       gcp-auth-d4c87556c-rvkcs
	af324551e3acf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   f7c3bdc6b5d36       ingress-nginx-admission-patch-bcmc8
	16613d882c3c2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   ef6dfde4bc5a1       ingress-nginx-admission-create-x6662
	2206c3a64be18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   2c95f7e45061d       storage-provisioner
	d75d498038df1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   a434e9284c215       kube-proxy-knjxm
	322e1f2f4c0f5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   52acb1149fd61       coredns-5dd5756b68-xrvs4
	be9c4e0f38cf6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   8b8a3b24fd7ba       etcd-addons-459174
	10044c358c5b0       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   09febea68f074       kube-apiserver-addons-459174
	639f2c74947d5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   75fc6820d0df1       kube-scheduler-addons-459174
	f46dd22f38dc7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   e8de98c5cc9db       kube-controller-manager-addons-459174
	
	
	==> coredns [322e1f2f4c0f5792e19614303977e8e5cf47ba19d2c819a5ca76ea0d725989c8] <==
	[INFO] 10.244.0.8:48130 - 4814 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155436s
	[INFO] 10.244.0.8:52947 - 36988 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000216252s
	[INFO] 10.244.0.8:52947 - 2228 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000198986s
	[INFO] 10.244.0.8:36179 - 5450 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001479477s
	[INFO] 10.244.0.8:36179 - 6471 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000128344s
	[INFO] 10.244.0.8:34591 - 44937 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167421s
	[INFO] 10.244.0.8:34591 - 29323 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071793s
	[INFO] 10.244.0.8:36303 - 10644 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095852s
	[INFO] 10.244.0.8:36303 - 20368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032862s
	[INFO] 10.244.0.8:41545 - 39790 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000024314s
	[INFO] 10.244.0.8:41545 - 22379 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028157s
	[INFO] 10.244.0.8:41441 - 17351 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030267s
	[INFO] 10.244.0.8:41441 - 46021 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022038s
	[INFO] 10.244.0.8:53639 - 11136 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036564s
	[INFO] 10.244.0.8:53639 - 40323 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000023721s
	[INFO] 10.244.0.21:57360 - 52479 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000487177s
	[INFO] 10.244.0.21:47458 - 50128 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000099193s
	[INFO] 10.244.0.21:47130 - 16333 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118969s
	[INFO] 10.244.0.21:48006 - 36874 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000406483s
	[INFO] 10.244.0.21:56739 - 4041 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076366s
	[INFO] 10.244.0.21:45406 - 16286 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138823s
	[INFO] 10.244.0.21:57265 - 64092 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000758784s
	[INFO] 10.244.0.21:53727 - 19648 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000649114s
	[INFO] 10.244.0.22:39877 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00035812s
	[INFO] 10.244.0.22:46572 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116508s
	
	
	==> describe nodes <==
	Name:               addons-459174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-459174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=addons-459174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T19_57_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-459174
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 19:57:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-459174
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:02:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:01:02 +0000   Tue, 12 Dec 2023 19:57:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:01:02 +0000   Tue, 12 Dec 2023 19:57:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:01:02 +0000   Tue, 12 Dec 2023 19:57:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:01:02 +0000   Tue, 12 Dec 2023 19:57:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    addons-459174
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 144c46bcd658492fb15a5603e9111f56
	  System UUID:                144c46bc-d658-492f-b15a-5603e9111f56
	  Boot ID:                    9a0120f3-bb7c-409e-be6f-d02cd85ecfec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-wmx78         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  gcp-auth                    gcp-auth-d4c87556c-rvkcs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  headlamp                    headlamp-777fd4b855-plznk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-5dd5756b68-xrvs4                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m27s
	  kube-system                 etcd-addons-459174                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-459174             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-459174    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-knjxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-459174             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m50s (x8 over 4m50s)  kubelet          Node addons-459174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s (x8 over 4m50s)  kubelet          Node addons-459174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s (x7 over 4m50s)  kubelet          Node addons-459174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node addons-459174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node addons-459174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node addons-459174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m40s                  kubelet          Node addons-459174 status is now: NodeReady
	  Normal  RegisteredNode           4m28s                  node-controller  Node addons-459174 event: Registered Node addons-459174 in Controller
	
	
	==> dmesg <==
	[  +0.138156] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.024066] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.658997] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.113721] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.137022] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.104344] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.198637] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.991982] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.268528] systemd-fstab-generator[1250]: Ignoring "noauto" for root device
	[Dec12 19:58] kauditd_printk_skb: 64 callbacks suppressed
	[  +8.349906] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.627707] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.378504] kauditd_printk_skb: 18 callbacks suppressed
	[Dec12 19:59] kauditd_printk_skb: 32 callbacks suppressed
	[ +17.734753] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.068568] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.024358] kauditd_printk_skb: 12 callbacks suppressed
	[Dec12 20:00] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.062534] kauditd_printk_skb: 3 callbacks suppressed
	[ +26.705131] kauditd_printk_skb: 7 callbacks suppressed
	[Dec12 20:01] kauditd_printk_skb: 12 callbacks suppressed
	[Dec12 20:02] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [be9c4e0f38cf65f25c76fb6dbbca23d1daf1a3458df01cc190b8e78140dcea77] <==
	{"level":"warn","ts":"2023-12-12T20:00:01.804548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T20:00:01.497097Z","time spent":"307.40541ms","remote":"127.0.0.1:41496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11154,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/minions/addons-459174\" mod_revision:1145 > success:<request_put:<key:\"/registry/minions/addons-459174\" value_size:11115 >> failure:<request_range:<key:\"/registry/minions/addons-459174\" > >"}
	{"level":"warn","ts":"2023-12-12T20:00:01.804697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.663371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6231"}
	{"level":"info","ts":"2023-12-12T20:00:01.804785Z","caller":"traceutil/trace.go:171","msg":"trace[1776305126] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1430; }","duration":"103.752159ms","start":"2023-12-12T20:00:01.701027Z","end":"2023-12-12T20:00:01.804779Z","steps":["trace[1776305126] 'agreement among raft nodes before linearized reading'  (duration: 103.625684ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T20:00:01.804908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.510296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-12-12T20:00:01.804932Z","caller":"traceutil/trace.go:171","msg":"trace[1371058511] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1430; }","duration":"103.530912ms","start":"2023-12-12T20:00:01.70139Z","end":"2023-12-12T20:00:01.804921Z","steps":["trace[1371058511] 'agreement among raft nodes before linearized reading'  (duration: 103.494295ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T20:00:07.553216Z","caller":"traceutil/trace.go:171","msg":"trace[1318412877] linearizableReadLoop","detail":"{readStateIndex:1510; appliedIndex:1509; }","duration":"440.306795ms","start":"2023-12-12T20:00:07.11289Z","end":"2023-12-12T20:00:07.553197Z","steps":["trace[1318412877] 'read index received'  (duration: 440.085845ms)","trace[1318412877] 'applied index is now lower than readState.Index'  (duration: 219.648µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T20:00:07.553334Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T20:00:07.088334Z","time spent":"464.996991ms","remote":"127.0.0.1:41464","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-12-12T20:00:07.553575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.732647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"warn","ts":"2023-12-12T20:00:07.553621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.456722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2023-12-12T20:00:07.553798Z","caller":"traceutil/trace.go:171","msg":"trace[840430536] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1465; }","duration":"304.628329ms","start":"2023-12-12T20:00:07.24916Z","end":"2023-12-12T20:00:07.553789Z","steps":["trace[840430536] 'agreement among raft nodes before linearized reading'  (duration: 304.439623ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T20:00:07.553859Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T20:00:07.24915Z","time spent":"304.699859ms","remote":"127.0.0.1:41492","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":844,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"info","ts":"2023-12-12T20:00:07.553641Z","caller":"traceutil/trace.go:171","msg":"trace[1214543619] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1465; }","duration":"308.819008ms","start":"2023-12-12T20:00:07.244811Z","end":"2023-12-12T20:00:07.55363Z","steps":["trace[1214543619] 'agreement among raft nodes before linearized reading'  (duration: 308.67601ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T20:00:07.55398Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T20:00:07.244695Z","time spent":"309.280284ms","remote":"127.0.0.1:41494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2023-12-12T20:00:07.553585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"440.707844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-12T20:00:07.554057Z","caller":"traceutil/trace.go:171","msg":"trace[2118281329] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1465; }","duration":"441.194351ms","start":"2023-12-12T20:00:07.112857Z","end":"2023-12-12T20:00:07.554052Z","steps":["trace[2118281329] 'agreement among raft nodes before linearized reading'  (duration: 440.685958ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T20:00:07.554073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T20:00:07.112819Z","time spent":"441.248466ms","remote":"127.0.0.1:41480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":18,"response size":29,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true "}
	{"level":"warn","ts":"2023-12-12T20:00:37.524437Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T20:00:37.099341Z","time spent":"425.091873ms","remote":"127.0.0.1:41464","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-12-12T20:01:08.265553Z","caller":"traceutil/trace.go:171","msg":"trace[1173941563] linearizableReadLoop","detail":"{readStateIndex:1826; appliedIndex:1825; }","duration":"104.128457ms","start":"2023-12-12T20:01:08.161411Z","end":"2023-12-12T20:01:08.265539Z","steps":["trace[1173941563] 'read index received'  (duration: 103.899527ms)","trace[1173941563] 'applied index is now lower than readState.Index'  (duration: 227.921µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T20:01:08.26567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.238078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/csi-hostpath-sc\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T20:01:08.265688Z","caller":"traceutil/trace.go:171","msg":"trace[473398005] range","detail":"{range_begin:/registry/storageclasses/csi-hostpath-sc; range_end:; response_count:0; response_revision:1761; }","duration":"104.297414ms","start":"2023-12-12T20:01:08.161385Z","end":"2023-12-12T20:01:08.265683Z","steps":["trace[473398005] 'agreement among raft nodes before linearized reading'  (duration: 104.219734ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T20:01:08.265912Z","caller":"traceutil/trace.go:171","msg":"trace[427769196] transaction","detail":"{read_only:false; response_revision:1761; number_of_response:1; }","duration":"145.003316ms","start":"2023-12-12T20:01:08.120899Z","end":"2023-12-12T20:01:08.265903Z","steps":["trace[427769196] 'process raft request'  (duration: 144.452754ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T20:01:13.376509Z","caller":"traceutil/trace.go:171","msg":"trace[1186144495] transaction","detail":"{read_only:false; response_revision:1797; number_of_response:1; }","duration":"157.508288ms","start":"2023-12-12T20:01:13.218979Z","end":"2023-12-12T20:01:13.376488Z","steps":["trace[1186144495] 'process raft request'  (duration: 157.204828ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T20:01:13.37703Z","caller":"traceutil/trace.go:171","msg":"trace[1444774982] linearizableReadLoop","detail":"{readStateIndex:1863; appliedIndex:1863; }","duration":"102.487899ms","start":"2023-12-12T20:01:13.274526Z","end":"2023-12-12T20:01:13.377014Z","steps":["trace[1444774982] 'read index received'  (duration: 102.482906ms)","trace[1444774982] 'applied index is now lower than readState.Index'  (duration: 3.757µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T20:01:13.37721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.69132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T20:01:13.377277Z","caller":"traceutil/trace.go:171","msg":"trace[499756678] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1797; }","duration":"102.772833ms","start":"2023-12-12T20:01:13.274494Z","end":"2023-12-12T20:01:13.377267Z","steps":["trace[499756678] 'agreement among raft nodes before linearized reading'  (duration: 102.617347ms)"],"step_count":1}
	
	
	==> gcp-auth [172d2de7fbe33af6a3ca7b572ea44eaa5779dce1f0aabd47709893121d2818f0] <==
	2023/12/12 19:59:38 GCP Auth Webhook started!
	2023/12/12 19:59:49 Ready to marshal response ...
	2023/12/12 19:59:49 Ready to write response ...
	2023/12/12 19:59:51 Ready to marshal response ...
	2023/12/12 19:59:51 Ready to write response ...
	2023/12/12 19:59:51 Ready to marshal response ...
	2023/12/12 19:59:51 Ready to write response ...
	2023/12/12 19:59:56 Ready to marshal response ...
	2023/12/12 19:59:56 Ready to write response ...
	2023/12/12 19:59:56 Ready to marshal response ...
	2023/12/12 19:59:56 Ready to write response ...
	2023/12/12 19:59:56 Ready to marshal response ...
	2023/12/12 19:59:56 Ready to write response ...
	2023/12/12 19:59:57 Ready to marshal response ...
	2023/12/12 19:59:57 Ready to write response ...
	2023/12/12 20:00:03 Ready to marshal response ...
	2023/12/12 20:00:03 Ready to write response ...
	2023/12/12 20:00:15 Ready to marshal response ...
	2023/12/12 20:00:15 Ready to write response ...
	2023/12/12 20:00:33 Ready to marshal response ...
	2023/12/12 20:00:33 Ready to write response ...
	2023/12/12 20:00:51 Ready to marshal response ...
	2023/12/12 20:00:51 Ready to write response ...
	2023/12/12 20:02:28 Ready to marshal response ...
	2023/12/12 20:02:28 Ready to write response ...
	
	
	==> kernel <==
	 20:02:39 up 5 min,  0 users,  load average: 0.75, 1.66, 0.89
	Linux addons-459174 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [10044c358c5b0dd98713fafb48caaff32ed2aa8cf2f5c0bb005ce9c121deeb1b] <==
	I1212 19:59:57.706297       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 19:59:58.006959       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.239.50"}
	I1212 20:00:00.724230       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1212 20:00:19.367383       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1212 20:00:46.561223       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 20:01:08.903089       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.903153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.910171       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.911794       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.919995       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.920234       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.935292       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.935360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.955672       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.955934       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.970807       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.970902       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.986239       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.986320       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 20:01:08.999446       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 20:01:08.999523       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 20:01:09.971224       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 20:01:10.005962       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1212 20:01:10.030081       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1212 20:02:28.764561       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.88.56"}
	
	
	==> kube-controller-manager [f46dd22f38dc7fe87aac4b9cc4f04a0ed38d2030fb89133ddc8e08b459e081c3] <==
	W1212 20:01:42.015116       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:01:42.015173       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 20:01:49.107004       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:01:49.107096       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 20:01:51.530212       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:01:51.530277       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 20:02:16.232561       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:02:16.232633       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 20:02:28.514460       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1212 20:02:28.554893       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-wmx78"
	I1212 20:02:28.575961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.762568ms"
	I1212 20:02:28.596603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.499226ms"
	I1212 20:02:28.596832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="111.361µs"
	I1212 20:02:28.597000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.045µs"
	I1212 20:02:31.229402       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1212 20:02:31.235051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.003µs"
	I1212 20:02:31.242045       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 20:02:31.406834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.596902ms"
	I1212 20:02:31.407908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.321µs"
	W1212 20:02:32.614916       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:02:32.614983       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 20:02:35.651035       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:02:35.651329       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 20:02:35.934911       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 20:02:35.934962       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [d75d498038df17f64c00a3c91467cfc0aa1593d304a9a5ad1ca0be8b534e718d] <==
	I1212 19:58:32.586926       1 server_others.go:69] "Using iptables proxy"
	I1212 19:58:32.731814       1 node.go:141] Successfully retrieved node IP: 192.168.39.145
	I1212 19:58:33.537527       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 19:58:33.537578       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 19:58:33.570994       1 server_others.go:152] "Using iptables Proxier"
	I1212 19:58:33.571060       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 19:58:33.571296       1 server.go:846] "Version info" version="v1.28.4"
	I1212 19:58:33.571305       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 19:58:33.580493       1 config.go:188] "Starting service config controller"
	I1212 19:58:33.580546       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 19:58:33.580572       1 config.go:97] "Starting endpoint slice config controller"
	I1212 19:58:33.580576       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 19:58:33.581137       1 config.go:315] "Starting node config controller"
	I1212 19:58:33.581144       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 19:58:33.682321       1 shared_informer.go:318] Caches are synced for service config
	I1212 19:58:33.684397       1 shared_informer.go:318] Caches are synced for node config
	I1212 19:58:33.685307       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [639f2c74947d526bb2619d394b82afe99db79598e7f6c4569f3b5f46a1d134f2] <==
	W1212 19:57:55.368337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 19:57:55.368344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 19:57:55.368368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 19:57:55.368374       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 19:57:56.178578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 19:57:56.178633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 19:57:56.311920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 19:57:56.311967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 19:57:56.363967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 19:57:56.364018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 19:57:56.434662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 19:57:56.434770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 19:57:56.486969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 19:57:56.487020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 19:57:56.511010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 19:57:56.511032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 19:57:56.571192       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 19:57:56.571349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 19:57:56.571578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 19:57:56.571592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 19:57:56.678348       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 19:57:56.678453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 19:57:56.855877       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 19:57:56.855945       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 19:57:59.557071       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 19:57:24 UTC, ends at Tue 2023-12-12 20:02:39 UTC. --
	Dec 12 20:02:28 addons-459174 kubelet[1257]: I1212 20:02:28.566954    1257 memory_manager.go:346] "RemoveStaleState removing state" podUID="96c09baf-df08-47c4-8171-db3903d25f30" containerName="csi-resizer"
	Dec 12 20:02:28 addons-459174 kubelet[1257]: I1212 20:02:28.566960    1257 memory_manager.go:346] "RemoveStaleState removing state" podUID="0bd0b35a-6889-48c9-82ab-c990ff145810" containerName="liveness-probe"
	Dec 12 20:02:28 addons-459174 kubelet[1257]: I1212 20:02:28.692024    1257 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-592d8\" (UniqueName: \"kubernetes.io/projected/ead679f2-6a6b-4aca-8d3a-815f630208c1-kube-api-access-592d8\") pod \"hello-world-app-5d77478584-wmx78\" (UID: \"ead679f2-6a6b-4aca-8d3a-815f630208c1\") " pod="default/hello-world-app-5d77478584-wmx78"
	Dec 12 20:02:28 addons-459174 kubelet[1257]: I1212 20:02:28.692102    1257 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ead679f2-6a6b-4aca-8d3a-815f630208c1-gcp-creds\") pod \"hello-world-app-5d77478584-wmx78\" (UID: \"ead679f2-6a6b-4aca-8d3a-815f630208c1\") " pod="default/hello-world-app-5d77478584-wmx78"
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.007555    1257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjwx\" (UniqueName: \"kubernetes.io/projected/d27e0916-21ec-47ef-865c-9cdf082b4dec-kube-api-access-4cjwx\") pod \"d27e0916-21ec-47ef-865c-9cdf082b4dec\" (UID: \"d27e0916-21ec-47ef-865c-9cdf082b4dec\") "
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.015160    1257 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27e0916-21ec-47ef-865c-9cdf082b4dec-kube-api-access-4cjwx" (OuterVolumeSpecName: "kube-api-access-4cjwx") pod "d27e0916-21ec-47ef-865c-9cdf082b4dec" (UID: "d27e0916-21ec-47ef-865c-9cdf082b4dec"). InnerVolumeSpecName "kube-api-access-4cjwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.109048    1257 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4cjwx\" (UniqueName: \"kubernetes.io/projected/d27e0916-21ec-47ef-865c-9cdf082b4dec-kube-api-access-4cjwx\") on node \"addons-459174\" DevicePath \"\""
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.363780    1257 scope.go:117] "RemoveContainer" containerID="fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a"
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.412604    1257 scope.go:117] "RemoveContainer" containerID="fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a"
	Dec 12 20:02:30 addons-459174 kubelet[1257]: E1212 20:02:30.413794    1257 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a\": container with ID starting with fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a not found: ID does not exist" containerID="fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a"
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.413842    1257 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a"} err="failed to get container status \"fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a\": rpc error: code = NotFound desc = could not find container \"fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a\": container with ID starting with fdc9584945ecf84c97b9a0a9121d37bff3636fca2d233d74c534fe4998a41b8a not found: ID does not exist"
	Dec 12 20:02:30 addons-459174 kubelet[1257]: I1212 20:02:30.852604    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d27e0916-21ec-47ef-865c-9cdf082b4dec" path="/var/lib/kubelet/pods/d27e0916-21ec-47ef-865c-9cdf082b4dec/volumes"
	Dec 12 20:02:32 addons-459174 kubelet[1257]: I1212 20:02:32.851011    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a0a8ad48-5fbb-4b6c-9789-39c74c69bbad" path="/var/lib/kubelet/pods/a0a8ad48-5fbb-4b6c-9789-39c74c69bbad/volumes"
	Dec 12 20:02:32 addons-459174 kubelet[1257]: I1212 20:02:32.851455    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="da87049e-0a77-40eb-add8-f242d8ac455f" path="/var/lib/kubelet/pods/da87049e-0a77-40eb-add8-f242d8ac455f/volumes"
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.642911    1257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b41071f-b730-40ed-9f59-6bea71537dad-webhook-cert\") pod \"5b41071f-b730-40ed-9f59-6bea71537dad\" (UID: \"5b41071f-b730-40ed-9f59-6bea71537dad\") "
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.643005    1257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlxrn\" (UniqueName: \"kubernetes.io/projected/5b41071f-b730-40ed-9f59-6bea71537dad-kube-api-access-vlxrn\") pod \"5b41071f-b730-40ed-9f59-6bea71537dad\" (UID: \"5b41071f-b730-40ed-9f59-6bea71537dad\") "
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.647812    1257 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b41071f-b730-40ed-9f59-6bea71537dad-kube-api-access-vlxrn" (OuterVolumeSpecName: "kube-api-access-vlxrn") pod "5b41071f-b730-40ed-9f59-6bea71537dad" (UID: "5b41071f-b730-40ed-9f59-6bea71537dad"). InnerVolumeSpecName "kube-api-access-vlxrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.648084    1257 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b41071f-b730-40ed-9f59-6bea71537dad-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5b41071f-b730-40ed-9f59-6bea71537dad" (UID: "5b41071f-b730-40ed-9f59-6bea71537dad"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.743905    1257 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vlxrn\" (UniqueName: \"kubernetes.io/projected/5b41071f-b730-40ed-9f59-6bea71537dad-kube-api-access-vlxrn\") on node \"addons-459174\" DevicePath \"\""
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.743940    1257 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b41071f-b730-40ed-9f59-6bea71537dad-webhook-cert\") on node \"addons-459174\" DevicePath \"\""
	Dec 12 20:02:34 addons-459174 kubelet[1257]: I1212 20:02:34.851310    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5b41071f-b730-40ed-9f59-6bea71537dad" path="/var/lib/kubelet/pods/5b41071f-b730-40ed-9f59-6bea71537dad/volumes"
	Dec 12 20:02:35 addons-459174 kubelet[1257]: I1212 20:02:35.401891    1257 scope.go:117] "RemoveContainer" containerID="a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7"
	Dec 12 20:02:35 addons-459174 kubelet[1257]: I1212 20:02:35.426262    1257 scope.go:117] "RemoveContainer" containerID="a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7"
	Dec 12 20:02:35 addons-459174 kubelet[1257]: E1212 20:02:35.427090    1257 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7\": container with ID starting with a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7 not found: ID does not exist" containerID="a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7"
	Dec 12 20:02:35 addons-459174 kubelet[1257]: I1212 20:02:35.427137    1257 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7"} err="failed to get container status \"a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7\": rpc error: code = NotFound desc = could not find container \"a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7\": container with ID starting with a9631adeb48f78986c68ccc69a29517ec39fe876e3714a2c6bed31809a416aa7 not found: ID does not exist"
	
	
	==> storage-provisioner [2206c3a64be188b62ba8b7190f9c95834565d01a239642d710f071a4a9fe7add] <==
	I1212 19:58:36.189632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 19:58:36.876359       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 19:58:36.876474       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 19:58:36.906657       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 19:58:36.907550       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1b484ee-2aab-4810-bf2f-470f638400c4", APIVersion:"v1", ResourceVersion:"865", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-459174_ad798a01-c168-43aa-9024-e20d97cbf741 became leader
	I1212 19:58:36.907609       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-459174_ad798a01-c168-43aa-9024-e20d97cbf741!
	I1212 19:58:37.108835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-459174_ad798a01-c168-43aa-9024-e20d97cbf741!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-459174 -n addons-459174
helpers_test.go:261: (dbg) Run:  kubectl --context addons-459174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (163.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (155.15s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-459174
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-459174: exit status 82 (2m1.370763626s)

                                                
                                                
-- stdout --
	* Stopping node "addons-459174"  ...
	* Stopping node "addons-459174"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-459174" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-459174
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-459174: exit status 11 (21.488958574s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-459174" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-459174
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-459174: exit status 11 (6.143314429s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-459174" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-459174
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-459174: exit status 11 (6.143958501s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-459174" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr: (5.089691595s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image ls: (2.335619345s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-686513" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.43s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (174.14s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-435457 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-435457 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.324701527s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-435457 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-435457 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.020638034s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1212 20:12:23.316371   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:13:56.433768   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:56.439067   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:56.449395   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:56.469664   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:56.509977   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:56.590301   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:56.750704   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:57.071305   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:57.711868   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:13:58.992309   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:14:01.554273   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:14:06.675175   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:14:16.916354   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-435457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.901790914s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-435457 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.34
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons disable ingress-dns --alsologtostderr -v=1: (13.360764444s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons disable ingress --alsologtostderr -v=1
E1212 20:14:37.396544   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:14:39.385029   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons disable ingress --alsologtostderr -v=1: (7.581500794s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-435457 -n ingress-addon-legacy-435457
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-435457 logs -n 25: (1.202341826s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-686513 ssh findmnt        | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | -T /mount1                           |                             |         |         |                     |                     |
	| service        | functional-686513 service            | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| ssh            | functional-686513 ssh findmnt        | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| service        | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| ssh            | functional-686513 ssh findmnt        | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-686513                 | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| service        | functional-686513 service            | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| update-context | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-686513 ssh pgrep          | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-686513 image build -t     | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | localhost/my-image:functional-686513 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-686513                    | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-686513 image ls           | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	| delete         | -p functional-686513                 | functional-686513           | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:09 UTC |
	| start          | -p ingress-addon-legacy-435457       | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:09 UTC | 12 Dec 23 20:11 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-435457          | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:11 UTC | 12 Dec 23 20:11 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-435457          | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:11 UTC | 12 Dec 23 20:11 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-435457          | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:12 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-435457 ip       | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:14 UTC | 12 Dec 23 20:14 UTC |
	| addons         | ingress-addon-legacy-435457          | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:14 UTC | 12 Dec 23 20:14 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-435457          | ingress-addon-legacy-435457 | jenkins | v1.32.0 | 12 Dec 23 20:14 UTC | 12 Dec 23 20:14 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
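
	Note that the `ssh` row above has a start time (12 Dec 23 20:12 UTC) but no completion time: the in-cluster curl against the legacy ingress controller never returned. A minimal way to re-run that same check by hand against this profile (assuming the profile and the binary from this run are still present) would be:

		out/minikube-linux-amd64 -p ingress-addon-legacy-435457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"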
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 20:09:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:09:49.349116   25149 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:09:49.349293   25149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:49.349304   25149 out.go:309] Setting ErrFile to fd 2...
	I1212 20:09:49.349311   25149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:49.349500   25149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:09:49.350093   25149 out.go:303] Setting JSON to false
	I1212 20:09:49.350940   25149 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3143,"bootTime":1702408646,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:09:49.350997   25149 start.go:138] virtualization: kvm guest
	I1212 20:09:49.353288   25149 out.go:177] * [ingress-addon-legacy-435457] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:09:49.355309   25149 notify.go:220] Checking for updates...
	I1212 20:09:49.355331   25149 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:09:49.356789   25149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:49.358386   25149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:09:49.359740   25149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:09:49.361103   25149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:09:49.362517   25149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:09:49.364301   25149 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:09:49.397843   25149 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 20:09:49.399085   25149 start.go:298] selected driver: kvm2
	I1212 20:09:49.399095   25149 start.go:902] validating driver "kvm2" against <nil>
	I1212 20:09:49.399116   25149 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:09:49.399769   25149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:09:49.399840   25149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 20:09:49.414328   25149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 20:09:49.414399   25149 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 20:09:49.414595   25149 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:49.414637   25149 cni.go:84] Creating CNI manager for ""
	I1212 20:09:49.414649   25149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:09:49.414659   25149 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 20:09:49.414667   25149 start_flags.go:323] config:
	{Name:ingress-addon-legacy-435457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-435457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:09:49.414808   25149 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:09:49.447935   25149 out.go:177] * Starting control plane node ingress-addon-legacy-435457 in cluster ingress-addon-legacy-435457
	I1212 20:09:49.449458   25149 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 20:09:49.473961   25149 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 20:09:49.474000   25149 cache.go:56] Caching tarball of preloaded images
	I1212 20:09:49.474135   25149 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 20:09:49.475885   25149 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 20:09:49.477133   25149 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 20:09:49.504036   25149 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 20:09:55.448348   25149 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 20:09:55.448439   25149 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 20:09:56.427913   25149 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
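
	The preload tarball is fetched with an md5 checksum pinned in the download URL (see the download.go line above). A rough sketch of verifying the same tarball out-of-band, assuming curl and md5sum are available on the host:

		curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
		echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -
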
	I1212 20:09:56.428245   25149 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/config.json ...
	I1212 20:09:56.428275   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/config.json: {Name:mk4c02f937c2faea9f86685f081da1b46aad0e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:56.428440   25149 start.go:365] acquiring machines lock for ingress-addon-legacy-435457: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:09:56.428479   25149 start.go:369] acquired machines lock for "ingress-addon-legacy-435457" in 18.418µs
	I1212 20:09:56.428496   25149 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-435457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-435457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:09:56.428576   25149 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 20:09:56.431373   25149 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1212 20:09:56.431545   25149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:09:56.431587   25149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:09:56.445289   25149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I1212 20:09:56.445700   25149 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:09:56.446274   25149 main.go:141] libmachine: Using API Version  1
	I1212 20:09:56.446290   25149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:09:56.446632   25149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:09:56.446824   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetMachineName
	I1212 20:09:56.446986   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:09:56.447124   25149 start.go:159] libmachine.API.Create for "ingress-addon-legacy-435457" (driver="kvm2")
	I1212 20:09:56.447149   25149 client.go:168] LocalClient.Create starting
	I1212 20:09:56.447178   25149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem
	I1212 20:09:56.447219   25149 main.go:141] libmachine: Decoding PEM data...
	I1212 20:09:56.447250   25149 main.go:141] libmachine: Parsing certificate...
	I1212 20:09:56.447318   25149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem
	I1212 20:09:56.447348   25149 main.go:141] libmachine: Decoding PEM data...
	I1212 20:09:56.447368   25149 main.go:141] libmachine: Parsing certificate...
	I1212 20:09:56.447393   25149 main.go:141] libmachine: Running pre-create checks...
	I1212 20:09:56.447408   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .PreCreateCheck
	I1212 20:09:56.447721   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetConfigRaw
	I1212 20:09:56.448128   25149 main.go:141] libmachine: Creating machine...
	I1212 20:09:56.448156   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .Create
	I1212 20:09:56.448278   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Creating KVM machine...
	I1212 20:09:56.449667   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found existing default KVM network
	I1212 20:09:56.450297   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:56.450168   25194 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I1212 20:09:56.455613   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | trying to create private KVM network mk-ingress-addon-legacy-435457 192.168.39.0/24...
	I1212 20:09:56.524790   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting up store path in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457 ...
	I1212 20:09:56.524826   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Building disk image from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 20:09:56.524841   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | private KVM network mk-ingress-addon-legacy-435457 192.168.39.0/24 created
	I1212 20:09:56.524864   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:56.524781   25194 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:09:56.524949   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Downloading /home/jenkins/minikube-integration/17734-9188/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 20:09:56.735653   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:56.735540   25194 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa...
	I1212 20:09:57.063538   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:57.063350   25194 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/ingress-addon-legacy-435457.rawdisk...
	I1212 20:09:57.063581   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Writing magic tar header
	I1212 20:09:57.063611   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457 (perms=drwx------)
	I1212 20:09:57.063625   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Writing SSH key tar header
	I1212 20:09:57.063644   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:57.063483   25194 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457 ...
	I1212 20:09:57.063654   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457
	I1212 20:09:57.063664   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines
	I1212 20:09:57.063680   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:09:57.063711   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines (perms=drwxr-xr-x)
	I1212 20:09:57.063727   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188
	I1212 20:09:57.063739   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 20:09:57.063747   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home/jenkins
	I1212 20:09:57.063757   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube (perms=drwxr-xr-x)
	I1212 20:09:57.063770   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Checking permissions on dir: /home
	I1212 20:09:57.063794   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Skipping /home - not owner
	I1212 20:09:57.063817   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188 (perms=drwxrwxr-x)
	I1212 20:09:57.063858   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 20:09:57.063889   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 20:09:57.063908   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Creating domain...
	I1212 20:09:57.064704   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) define libvirt domain using xml: 
	I1212 20:09:57.064734   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) <domain type='kvm'>
	I1212 20:09:57.064747   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <name>ingress-addon-legacy-435457</name>
	I1212 20:09:57.064757   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <memory unit='MiB'>4096</memory>
	I1212 20:09:57.064773   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <vcpu>2</vcpu>
	I1212 20:09:57.064780   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <features>
	I1212 20:09:57.064790   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <acpi/>
	I1212 20:09:57.064798   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <apic/>
	I1212 20:09:57.064804   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <pae/>
	I1212 20:09:57.064812   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     
	I1212 20:09:57.064838   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   </features>
	I1212 20:09:57.064862   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <cpu mode='host-passthrough'>
	I1212 20:09:57.064883   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   
	I1212 20:09:57.064896   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   </cpu>
	I1212 20:09:57.064910   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <os>
	I1212 20:09:57.064932   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <type>hvm</type>
	I1212 20:09:57.064944   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <boot dev='cdrom'/>
	I1212 20:09:57.064955   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <boot dev='hd'/>
	I1212 20:09:57.064969   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <bootmenu enable='no'/>
	I1212 20:09:57.064982   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   </os>
	I1212 20:09:57.064996   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   <devices>
	I1212 20:09:57.065010   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <disk type='file' device='cdrom'>
	I1212 20:09:57.065026   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/boot2docker.iso'/>
	I1212 20:09:57.065037   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <target dev='hdc' bus='scsi'/>
	I1212 20:09:57.065049   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <readonly/>
	I1212 20:09:57.065061   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </disk>
	I1212 20:09:57.065077   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <disk type='file' device='disk'>
	I1212 20:09:57.065092   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 20:09:57.065113   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/ingress-addon-legacy-435457.rawdisk'/>
	I1212 20:09:57.065130   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <target dev='hda' bus='virtio'/>
	I1212 20:09:57.065148   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </disk>
	I1212 20:09:57.065161   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <interface type='network'>
	I1212 20:09:57.065176   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <source network='mk-ingress-addon-legacy-435457'/>
	I1212 20:09:57.065189   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <model type='virtio'/>
	I1212 20:09:57.065202   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </interface>
	I1212 20:09:57.065267   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <interface type='network'>
	I1212 20:09:57.065293   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <source network='default'/>
	I1212 20:09:57.065314   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <model type='virtio'/>
	I1212 20:09:57.065325   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </interface>
	I1212 20:09:57.065334   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <serial type='pty'>
	I1212 20:09:57.065346   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <target port='0'/>
	I1212 20:09:57.065354   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </serial>
	I1212 20:09:57.065362   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <console type='pty'>
	I1212 20:09:57.065371   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <target type='serial' port='0'/>
	I1212 20:09:57.065377   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </console>
	I1212 20:09:57.065386   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     <rng model='virtio'>
	I1212 20:09:57.065392   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)       <backend model='random'>/dev/random</backend>
	I1212 20:09:57.065404   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     </rng>
	I1212 20:09:57.065418   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     
	I1212 20:09:57.065429   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)     
	I1212 20:09:57.065439   25149 main.go:141] libmachine: (ingress-addon-legacy-435457)   </devices>
	I1212 20:09:57.065447   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) </domain>
	I1212 20:09:57.065456   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) 
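
	The block above is the libvirt domain XML minikube defines for this VM (2 vCPUs, 4096 MiB, boot2docker ISO as cdrom, raw disk, and two virtio NICs on the default and mk-ingress-addon-legacy-435457 networks). Assuming libvirt access on the build host, the defined domain and its private network can be inspected directly with virsh, using the same qemu:///system URI set earlier in this log:

		virsh --connect qemu:///system dumpxml ingress-addon-legacy-435457
		virsh --connect qemu:///system net-dumpxml mk-ingress-addon-legacy-435457
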
	I1212 20:09:57.069504   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:bd:a7:9c in network default
	I1212 20:09:57.069986   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Ensuring networks are active...
	I1212 20:09:57.070014   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:09:57.070614   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Ensuring network default is active
	I1212 20:09:57.070841   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Ensuring network mk-ingress-addon-legacy-435457 is active
	I1212 20:09:57.071336   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Getting domain xml...
	I1212 20:09:57.071961   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Creating domain...
	I1212 20:09:58.316489   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Waiting to get IP...
	I1212 20:09:58.317136   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:09:58.317477   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:09:58.317501   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:58.317442   25194 retry.go:31] will retry after 213.722869ms: waiting for machine to come up
	I1212 20:09:58.532852   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:09:58.533488   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:09:58.533727   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:58.533445   25194 retry.go:31] will retry after 371.329258ms: waiting for machine to come up
	I1212 20:09:58.906156   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:09:58.906578   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:09:58.906609   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:58.906525   25194 retry.go:31] will retry after 457.264642ms: waiting for machine to come up
	I1212 20:09:59.365141   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:09:59.365477   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:09:59.365506   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:59.365424   25194 retry.go:31] will retry after 515.39051ms: waiting for machine to come up
	I1212 20:09:59.882045   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:09:59.882473   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:09:59.882503   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:09:59.882408   25194 retry.go:31] will retry after 488.049976ms: waiting for machine to come up
	I1212 20:10:00.372129   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:00.372549   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:00.372572   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:00.372499   25194 retry.go:31] will retry after 625.637052ms: waiting for machine to come up
	I1212 20:10:00.999194   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:00.999647   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:00.999668   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:00.999611   25194 retry.go:31] will retry after 1.069359727s: waiting for machine to come up
	I1212 20:10:02.070314   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:02.070820   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:02.070847   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:02.070761   25194 retry.go:31] will retry after 1.097228034s: waiting for machine to come up
	I1212 20:10:03.169880   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:03.170301   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:03.170329   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:03.170252   25194 retry.go:31] will retry after 1.526593437s: waiting for machine to come up
	I1212 20:10:04.698899   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:04.699289   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:04.699318   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:04.699247   25194 retry.go:31] will retry after 1.512480906s: waiting for machine to come up
	I1212 20:10:06.213879   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:06.214384   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:06.214411   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:06.214341   25194 retry.go:31] will retry after 1.990651876s: waiting for machine to come up
	I1212 20:10:08.206906   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:08.207356   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:08.207389   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:08.207303   25194 retry.go:31] will retry after 2.256812426s: waiting for machine to come up
	I1212 20:10:10.465790   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:10.466144   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:10.466175   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:10.466090   25194 retry.go:31] will retry after 2.918047373s: waiting for machine to come up
	I1212 20:10:13.386399   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:13.386717   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find current IP address of domain ingress-addon-legacy-435457 in network mk-ingress-addon-legacy-435457
	I1212 20:10:13.386741   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | I1212 20:10:13.386688   25194 retry.go:31] will retry after 5.061885574s: waiting for machine to come up
	I1212 20:10:18.451477   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.451896   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Found IP for machine: 192.168.39.34
	I1212 20:10:18.451934   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has current primary IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.451946   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Reserving static IP address...
	I1212 20:10:18.452271   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-435457", mac: "52:54:00:7c:49:2e", ip: "192.168.39.34"} in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.523065   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Getting to WaitForSSH function...
	I1212 20:10:18.523098   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Reserved static IP address: 192.168.39.34
	I1212 20:10:18.523118   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Waiting for SSH to be available...
	I1212 20:10:18.525580   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.525975   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:18.526026   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.526134   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Using SSH client type: external
	I1212 20:10:18.526168   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa (-rw-------)
	I1212 20:10:18.526194   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 20:10:18.526205   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | About to run SSH command:
	I1212 20:10:18.526215   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | exit 0
	I1212 20:10:18.618768   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | SSH cmd err, output: <nil>: 
	I1212 20:10:18.619049   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) KVM machine creation complete!
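
	The retry loop above polls the domain's DHCP lease with increasing backoff until an address appears, then reserves it as a static lease. A quick way to see the same lease libvirt handed out, assuming virsh access to this host:

		virsh --connect qemu:///system net-dhcp-leases mk-ingress-addon-legacy-435457
		# per the log above, this should list MAC 52:54:00:7c:49:2e with IP 192.168.39.34/24
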
	I1212 20:10:18.619403   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetConfigRaw
	I1212 20:10:18.619846   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:18.620021   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:18.620204   25149 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 20:10:18.620219   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetState
	I1212 20:10:18.621648   25149 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 20:10:18.621663   25149 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 20:10:18.621669   25149 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 20:10:18.621675   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:18.624181   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.624560   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:18.624592   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.624743   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:18.624923   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:18.625094   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:18.625268   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:18.625413   25149 main.go:141] libmachine: Using SSH client type: native
	I1212 20:10:18.625799   25149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1212 20:10:18.625814   25149 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 20:10:18.750627   25149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:10:18.750658   25149 main.go:141] libmachine: Detecting the provisioner...
	I1212 20:10:18.750676   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:18.753077   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.753387   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:18.753417   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.753647   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:18.753869   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:18.754034   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:18.754146   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:18.754308   25149 main.go:141] libmachine: Using SSH client type: native
	I1212 20:10:18.754810   25149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1212 20:10:18.754835   25149 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 20:10:18.884273   25149 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 20:10:18.884370   25149 main.go:141] libmachine: found compatible host: buildroot
	I1212 20:10:18.884386   25149 main.go:141] libmachine: Provisioning with buildroot...
	I1212 20:10:18.884402   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetMachineName
	I1212 20:10:18.884676   25149 buildroot.go:166] provisioning hostname "ingress-addon-legacy-435457"
	I1212 20:10:18.884704   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetMachineName
	I1212 20:10:18.884844   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:18.887374   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.887745   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:18.887768   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:18.887898   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:18.888072   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:18.888199   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:18.888339   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:18.888481   25149 main.go:141] libmachine: Using SSH client type: native
	I1212 20:10:18.888854   25149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1212 20:10:18.888870   25149 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-435457 && echo "ingress-addon-legacy-435457" | sudo tee /etc/hostname
	I1212 20:10:19.028244   25149 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-435457
	
	I1212 20:10:19.028278   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:19.031183   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.031633   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.031670   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.031837   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:19.032038   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.032234   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.032395   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:19.032562   25149 main.go:141] libmachine: Using SSH client type: native
	I1212 20:10:19.033014   25149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1212 20:10:19.033049   25149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-435457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-435457/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-435457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:10:19.167322   25149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
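
	The remote script above pins the machine name to 127.0.1.1 in /etc/hosts, replacing an existing 127.0.1.1 entry or appending one. A one-line check of the result, using the same minikube ssh path as the commands in the table at the top:

		out/minikube-linux-amd64 -p ingress-addon-legacy-435457 ssh "hostname; grep ingress-addon-legacy-435457 /etc/hosts"
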
	I1212 20:10:19.167352   25149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:10:19.167384   25149 buildroot.go:174] setting up certificates
	I1212 20:10:19.167391   25149 provision.go:83] configureAuth start
	I1212 20:10:19.167401   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetMachineName
	I1212 20:10:19.167714   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetIP
	I1212 20:10:19.170288   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.170693   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.170729   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.170798   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:19.173027   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.173346   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.173375   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.173515   25149 provision.go:138] copyHostCerts
	I1212 20:10:19.173540   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:10:19.173569   25149 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:10:19.173577   25149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:10:19.173636   25149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:10:19.173720   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:10:19.173741   25149 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:10:19.173748   25149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:10:19.173774   25149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:10:19.173813   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:10:19.173828   25149 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:10:19.173835   25149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:10:19.173853   25149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:10:19.173893   25149 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-435457 san=[192.168.39.34 192.168.39.34 localhost 127.0.0.1 minikube ingress-addon-legacy-435457]
	I1212 20:10:19.332602   25149 provision.go:172] copyRemoteCerts
	I1212 20:10:19.332662   25149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:10:19.332686   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:19.335040   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.335364   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.335394   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.335559   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:19.335739   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.335883   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:19.335972   25149 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa Username:docker}
	I1212 20:10:19.432508   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:10:19.432588   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:10:19.455862   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:10:19.455935   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1212 20:10:19.477992   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:10:19.478070   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:10:19.500437   25149 provision.go:86] duration metric: configureAuth took 333.034966ms
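
	configureAuth generated a server certificate with the SANs listed above (192.168.39.34, localhost, 127.0.0.1, minikube, ingress-addon-legacy-435457) and copied it to /etc/docker on the guest. A sketch of double-checking those SANs from the copy kept on the host, assuming openssl is installed there:

		openssl x509 -in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -noout -text | grep -A1 "Subject Alternative Name"
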
	I1212 20:10:19.500461   25149 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:10:19.500660   25149 config.go:182] Loaded profile config "ingress-addon-legacy-435457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 20:10:19.500747   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:19.502950   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.503224   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.503266   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.503412   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:19.503590   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.503789   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.503954   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:19.504121   25149 main.go:141] libmachine: Using SSH client type: native
	I1212 20:10:19.504442   25149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1212 20:10:19.504458   25149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:10:19.826160   25149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:10:19.826247   25149 main.go:141] libmachine: Checking connection to Docker...
	I1212 20:10:19.826285   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetURL
	I1212 20:10:19.827635   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Using libvirt version 6000000
	I1212 20:10:19.830003   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.830324   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.830366   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.830539   25149 main.go:141] libmachine: Docker is up and running!
	I1212 20:10:19.830554   25149 main.go:141] libmachine: Reticulating splines...
	I1212 20:10:19.830561   25149 client.go:171] LocalClient.Create took 23.383404693s
	I1212 20:10:19.830590   25149 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-435457" took 23.383465967s
	I1212 20:10:19.830605   25149 start.go:300] post-start starting for "ingress-addon-legacy-435457" (driver="kvm2")
	I1212 20:10:19.830620   25149 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:10:19.830642   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:19.830868   25149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:10:19.830889   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:19.833113   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.833416   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.833439   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.833589   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:19.833790   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.833965   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:19.834095   25149 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa Username:docker}
	I1212 20:10:19.924416   25149 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:10:19.928791   25149 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:10:19.928818   25149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:10:19.928890   25149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:10:19.928977   25149 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:10:19.928990   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /etc/ssl/certs/164562.pem
	I1212 20:10:19.929116   25149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:10:19.937625   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:10:19.962336   25149 start.go:303] post-start completed in 131.713442ms
	I1212 20:10:19.962385   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetConfigRaw
	I1212 20:10:19.962919   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetIP
	I1212 20:10:19.965567   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.965876   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.965909   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.966105   25149 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/config.json ...
	I1212 20:10:19.966299   25149 start.go:128] duration metric: createHost completed in 23.537713602s
	I1212 20:10:19.966321   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:19.968318   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.968629   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:19.968652   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:19.968771   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:19.969013   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.969168   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:19.969332   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:19.969503   25149 main.go:141] libmachine: Using SSH client type: native
	I1212 20:10:19.969822   25149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1212 20:10:19.969833   25149 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:10:20.096394   25149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702411820.075227931
	
	I1212 20:10:20.096418   25149 fix.go:206] guest clock: 1702411820.075227931
	I1212 20:10:20.096427   25149 fix.go:219] Guest: 2023-12-12 20:10:20.075227931 +0000 UTC Remote: 2023-12-12 20:10:19.966310339 +0000 UTC m=+30.665915176 (delta=108.917592ms)
	I1212 20:10:20.096490   25149 fix.go:190] guest clock delta is within tolerance: 108.917592ms
	I1212 20:10:20.096497   25149 start.go:83] releasing machines lock for "ingress-addon-legacy-435457", held for 23.668008527s
	I1212 20:10:20.096529   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:20.096791   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetIP
	I1212 20:10:20.099167   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:20.099555   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:20.099585   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:20.099675   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:20.100242   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:20.100403   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:20.100478   25149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:10:20.100519   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:20.100621   25149 ssh_runner.go:195] Run: cat /version.json
	I1212 20:10:20.100643   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:20.103083   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:20.103204   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:20.103465   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:20.103494   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:20.103616   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:20.103625   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:20.103660   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:20.103794   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:20.103813   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:20.104010   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:20.104026   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:20.104181   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:20.104180   25149 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa Username:docker}
	I1212 20:10:20.104306   25149 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa Username:docker}
	I1212 20:10:20.193187   25149 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:20.222998   25149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:10:20.903736   25149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:10:20.910749   25149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:10:20.910847   25149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:10:20.927493   25149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:10:20.927515   25149 start.go:475] detecting cgroup driver to use...
	I1212 20:10:20.927579   25149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:10:20.944251   25149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:10:20.960015   25149 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:10:20.960077   25149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:10:20.974021   25149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:10:20.987930   25149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:10:21.096718   25149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:10:21.225767   25149 docker.go:219] disabling docker service ...
	I1212 20:10:21.225849   25149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:10:21.240439   25149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:10:21.253152   25149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:10:21.377827   25149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:10:21.489125   25149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:10:21.501978   25149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:10:21.519648   25149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 20:10:21.519744   25149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:21.528691   25149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:10:21.528772   25149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:21.537887   25149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:21.547171   25149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:21.556368   25149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:10:21.565741   25149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:10:21.574096   25149 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:10:21.574165   25149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 20:10:21.586960   25149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:10:21.595264   25149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:21.697709   25149 ssh_runner.go:195] Run: sudo systemctl restart crio
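
The container-runtime preparation above reduces to a handful of idempotent shell edits on the guest. The sketch below simply restates the commands visible in the log (the pause image, cgroup manager, and conmon_cgroup values are taken from those lines, not from CRI-O defaults):

# point crictl at the CRI-O socket
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

# pin the pause image and switch CRI-O to the cgroupfs driver
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

# make sure bridged traffic hits iptables and IPv4 forwarding is on
sudo modprobe br_netfilter
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

# pick up the new configuration
sudo systemctl daemon-reload && sudo systemctl restart crio
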
	I1212 20:10:21.863508   25149 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:10:21.863592   25149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:10:21.868985   25149 start.go:543] Will wait 60s for crictl version
	I1212 20:10:21.869049   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:21.874228   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:10:21.911140   25149 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 20:10:21.911227   25149 ssh_runner.go:195] Run: crio --version
	I1212 20:10:21.958658   25149 ssh_runner.go:195] Run: crio --version
	I1212 20:10:22.010081   25149 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1212 20:10:22.011872   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetIP
	I1212 20:10:22.014470   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:22.014808   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:22.014842   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:22.015104   25149 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:22.019440   25149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
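
The hosts-file update above is an idempotent append: any stale mapping for the name is filtered out, the fresh mapping is added, and the result is installed through a temp file so only the final cp needs root. A generalized sketch of the same pattern (NAME and ADDR are illustrative variables, not taken from the log):

NAME=host.minikube.internal
ADDR=192.168.39.1
# keep every line except an old mapping for $NAME, then append the current one
{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/hosts.$$
# install the rewritten file in one privileged step
sudo cp /tmp/hosts.$$ /etc/hosts
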
	I1212 20:10:22.032530   25149 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 20:10:22.032598   25149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:22.069981   25149 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 20:10:22.070052   25149 ssh_runner.go:195] Run: which lz4
	I1212 20:10:22.073839   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 20:10:22.073941   25149 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 20:10:22.077955   25149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:10:22.077994   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1212 20:10:24.050510   25149 crio.go:444] Took 1.976601 seconds to copy over tarball
	I1212 20:10:24.050591   25149 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:10:27.169641   25149 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.119025846s)
	I1212 20:10:27.169669   25149 crio.go:451] Took 3.119132 seconds to extract the tarball
	I1212 20:10:27.169681   25149 ssh_runner.go:146] rm: /preloaded.tar.lz4
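
The preload path ships a ~495 MB lz4-compressed tarball of container images into the guest and unpacks it under /var so the runtime's image store is populated before kubeadm runs. Roughly the equivalent manual steps, per the log (the scp target is a placeholder; minikube streams the file over its own SSH client):

# copy the preload tarball into the guest (placeholder destination)
scp preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 user@guest:/preloaded.tar.lz4
# extract the container-storage tree under /var, then reclaim the space
sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
sudo rm /preloaded.tar.lz4
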
	I1212 20:10:27.213488   25149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:27.276619   25149 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 20:10:27.276643   25149 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 20:10:27.276732   25149 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 20:10:27.276752   25149 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 20:10:27.276751   25149 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 20:10:27.276789   25149 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 20:10:27.276810   25149 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 20:10:27.276741   25149 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 20:10:27.276730   25149 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:27.276788   25149 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 20:10:27.277842   25149 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 20:10:27.277868   25149 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 20:10:27.277894   25149 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 20:10:27.277844   25149 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 20:10:27.277938   25149 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 20:10:27.277974   25149 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:27.277842   25149 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 20:10:27.277842   25149 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 20:10:27.471797   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 20:10:27.478295   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1212 20:10:27.481435   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 20:10:27.489107   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 20:10:27.489895   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1212 20:10:27.494160   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1212 20:10:27.498493   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 20:10:27.560216   25149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1212 20:10:27.560264   25149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 20:10:27.560320   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.572857   25149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:27.626484   25149 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1212 20:10:27.626536   25149 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 20:10:27.626589   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.628581   25149 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1212 20:10:27.628617   25149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 20:10:27.628652   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.691299   25149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1212 20:10:27.691345   25149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 20:10:27.691360   25149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1212 20:10:27.691381   25149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 20:10:27.691390   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.691414   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.691602   25149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1212 20:10:27.691639   25149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 20:10:27.691684   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.733302   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 20:10:27.733552   25149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 20:10:27.733586   25149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 20:10:27.733628   25149 ssh_runner.go:195] Run: which crictl
	I1212 20:10:27.807580   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 20:10:27.807588   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 20:10:27.807662   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 20:10:27.807730   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 20:10:27.807763   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 20:10:27.807815   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 20:10:27.807853   25149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 20:10:27.946140   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1212 20:10:27.946206   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 20:10:27.947507   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 20:10:27.947597   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 20:10:27.947625   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 20:10:27.947648   25149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 20:10:27.947687   25149 cache_images.go:92] LoadImages completed in 671.032544ms
	W1212 20:10:27.947786   25149 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I1212 20:10:27.947858   25149 ssh_runner.go:195] Run: crio config
	I1212 20:10:28.006531   25149 cni.go:84] Creating CNI manager for ""
	I1212 20:10:28.006553   25149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:10:28.006569   25149 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:28.006592   25149 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-435457 NodeName:ingress-addon-legacy-435457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 20:10:28.006770   25149 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-435457"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:10:28.006875   25149 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-435457 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-435457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 20:10:28.006934   25149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 20:10:28.016372   25149 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 20:10:28.016447   25149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:28.025300   25149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1212 20:10:28.041247   25149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 20:10:28.057576   25149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1212 20:10:28.073139   25149 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:28.076662   25149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:28.087834   25149 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457 for IP: 192.168.39.34
	I1212 20:10:28.087862   25149 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.087997   25149 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:10:28.088035   25149 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:10:28.088092   25149 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.key
	I1212 20:10:28.088106   25149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt with IP's: []
	I1212 20:10:28.324521   25149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt ...
	I1212 20:10:28.324549   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: {Name:mkbba75b38cd69cca8cf4f1b0c96a68424b6159f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.324703   25149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.key ...
	I1212 20:10:28.324717   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.key: {Name:mkde57f6c27f23168b805b74ebe01a7dd7fe5530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.324826   25149 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key.5427ad8d
	I1212 20:10:28.324842   25149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt.5427ad8d with IP's: [192.168.39.34 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 20:10:28.403810   25149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt.5427ad8d ...
	I1212 20:10:28.403849   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt.5427ad8d: {Name:mk3cebf50df6cbc7f19c93eb9417e7431203fae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.404000   25149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key.5427ad8d ...
	I1212 20:10:28.404012   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key.5427ad8d: {Name:mk01af2181c4026aa6a7b19c73a48bb19122c90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.404074   25149 certs.go:337] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt.5427ad8d -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt
	I1212 20:10:28.404134   25149 certs.go:341] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key.5427ad8d -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key
	I1212 20:10:28.404190   25149 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.key
	I1212 20:10:28.404204   25149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.crt with IP's: []
	I1212 20:10:28.515588   25149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.crt ...
	I1212 20:10:28.515615   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.crt: {Name:mkd469f096df3e7501889cc76a23a8ac70d8dd4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.515764   25149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.key ...
	I1212 20:10:28.515776   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.key: {Name:mke0550e740002462bb4f2fe697265ac99491cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:28.515847   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:10:28.515872   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:10:28.515883   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:10:28.515895   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:10:28.515904   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:10:28.515916   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:10:28.515926   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:10:28.515939   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:10:28.515995   25149 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:10:28.516025   25149 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:10:28.516035   25149 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:10:28.516057   25149 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:10:28.516079   25149 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:28.516099   25149 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:10:28.516136   25149 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:10:28.516161   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:28.516174   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem -> /usr/share/ca-certificates/16456.pem
	I1212 20:10:28.516185   25149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /usr/share/ca-certificates/164562.pem
	I1212 20:10:28.516857   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 20:10:28.540839   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:10:28.565429   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:28.587959   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:10:28.610695   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:28.638095   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:28.661458   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:28.684571   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:10:28.708916   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:28.733019   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:10:28.757338   25149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:10:28.780907   25149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:28.798863   25149 ssh_runner.go:195] Run: openssl version
	I1212 20:10:28.804565   25149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:10:28.816229   25149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:28.821158   25149 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:28.821216   25149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:28.826789   25149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 20:10:28.836867   25149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:10:28.847176   25149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:10:28.851578   25149 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:10:28.851645   25149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:10:28.856906   25149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:10:28.866660   25149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:10:28.876755   25149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:10:28.881063   25149 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:10:28.881111   25149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:10:28.886304   25149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
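
The symlink names seen above (b5213941.0, 51391683.0, 3ec20f2e.0) come from OpenSSL's hashed-directory lookup: each CA certificate is linked into /etc/ssl/certs under its subject-name hash so libssl's default verify path can find it. A minimal sketch of how one such link is derived:

# subject-name hash OpenSSL uses for lookup, e.g. b5213941 for minikubeCA
hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
# link the certificate under <hash>.0 so TLS verification can discover it
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
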
	I1212 20:10:28.896229   25149 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:10:28.900146   25149 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:10:28.900212   25149 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-435457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-435457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:10:28.900324   25149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:28.900377   25149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:28.937107   25149 cri.go:89] found id: ""
	I1212 20:10:28.937190   25149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:28.946709   25149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:28.955883   25149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:28.964797   25149 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:28.964847   25149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 20:10:29.018280   25149 kubeadm.go:322] W1212 20:10:29.009928     962 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 20:10:29.152946   25149 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:10:32.110800   25149 kubeadm.go:322] W1212 20:10:32.104612     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 20:10:32.111875   25149 kubeadm.go:322] W1212 20:10:32.105664     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 20:10:43.150973   25149 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 20:10:43.151051   25149 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 20:10:43.151152   25149 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:10:43.151272   25149 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:10:43.151438   25149 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 20:10:43.151579   25149 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:10:43.151676   25149 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:10:43.151724   25149 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 20:10:43.151811   25149 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:10:43.153241   25149 out.go:204]   - Generating certificates and keys ...
	I1212 20:10:43.153322   25149 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 20:10:43.153403   25149 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 20:10:43.153487   25149 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:10:43.153569   25149 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:10:43.153641   25149 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:10:43.153717   25149 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 20:10:43.153781   25149 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 20:10:43.153890   25149 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-435457 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I1212 20:10:43.153960   25149 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 20:10:43.154115   25149 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-435457 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I1212 20:10:43.154188   25149 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:10:43.154242   25149 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:10:43.154291   25149 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 20:10:43.154390   25149 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:10:43.154474   25149 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:10:43.154526   25149 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:10:43.154627   25149 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:10:43.154713   25149 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:10:43.154805   25149 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:10:43.155971   25149 out.go:204]   - Booting up control plane ...
	I1212 20:10:43.156044   25149 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:10:43.156115   25149 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:10:43.156194   25149 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:10:43.156267   25149 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:10:43.156407   25149 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 20:10:43.156487   25149 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503461 seconds
	I1212 20:10:43.156613   25149 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:10:43.156773   25149 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:10:43.156860   25149 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:10:43.156991   25149 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-435457 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 20:10:43.157038   25149 kubeadm.go:322] [bootstrap-token] Using token: o2bl9y.3c1ig0pczjni2546
	I1212 20:10:43.158526   25149 out.go:204]   - Configuring RBAC rules ...
	I1212 20:10:43.158611   25149 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:10:43.158699   25149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:10:43.158864   25149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:10:43.159030   25149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:10:43.159141   25149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:10:43.159269   25149 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:10:43.159410   25149 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:10:43.159482   25149 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 20:10:43.159550   25149 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 20:10:43.159564   25149 kubeadm.go:322] 
	I1212 20:10:43.159647   25149 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 20:10:43.159655   25149 kubeadm.go:322] 
	I1212 20:10:43.159755   25149 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 20:10:43.159767   25149 kubeadm.go:322] 
	I1212 20:10:43.159787   25149 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 20:10:43.159879   25149 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:10:43.159960   25149 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:10:43.159970   25149 kubeadm.go:322] 
	I1212 20:10:43.160046   25149 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 20:10:43.160143   25149 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:10:43.160222   25149 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:10:43.160230   25149 kubeadm.go:322] 
	I1212 20:10:43.160300   25149 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:10:43.160364   25149 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 20:10:43.160370   25149 kubeadm.go:322] 
	I1212 20:10:43.160443   25149 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o2bl9y.3c1ig0pczjni2546 \
	I1212 20:10:43.160618   25149 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 20:10:43.160665   25149 kubeadm.go:322]     --control-plane 
	I1212 20:10:43.160676   25149 kubeadm.go:322] 
	I1212 20:10:43.160790   25149 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:10:43.160802   25149 kubeadm.go:322] 
	I1212 20:10:43.160913   25149 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o2bl9y.3c1ig0pczjni2546 \
	I1212 20:10:43.161077   25149 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 20:10:43.161087   25149 cni.go:84] Creating CNI manager for ""
	I1212 20:10:43.161093   25149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:10:43.162639   25149 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 20:10:43.163820   25149 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 20:10:43.174323   25149 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 20:10:43.193683   25149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:10:43.193748   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=ingress-addon-legacy-435457 minikube.k8s.io/updated_at=2023_12_12T20_10_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:43.193753   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:43.572490   25149 ops.go:34] apiserver oom_adj: -16
	I1212 20:10:43.572513   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:43.707611   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:44.293103   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:44.793231   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:45.292571   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:45.793357   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:46.292477   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:46.793223   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:47.292577   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:47.793489   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:48.293411   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:48.793141   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:49.293399   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:49.793063   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:50.293504   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:50.793550   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:51.292722   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:51.793185   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:52.293332   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:52.792701   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:53.293479   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:53.792521   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.293399   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.793060   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.292771   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.792670   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.292565   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.793204   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:57.293493   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:57.792764   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:58.292574   25149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:58.456797   25149 kubeadm.go:1088] duration metric: took 15.263108663s to wait for elevateKubeSystemPrivileges.
	I1212 20:10:58.456835   25149 kubeadm.go:406] StartCluster complete in 29.556639894s
	I1212 20:10:58.456852   25149 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:58.456919   25149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:10:58.457591   25149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:58.457787   25149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:10:58.457932   25149 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 20:10:58.458013   25149 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-435457"
	I1212 20:10:58.458025   25149 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-435457"
	I1212 20:10:58.458035   25149 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-435457"
	I1212 20:10:58.458040   25149 config.go:182] Loaded profile config "ingress-addon-legacy-435457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 20:10:58.458062   25149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-435457"
	I1212 20:10:58.458089   25149 host.go:66] Checking if "ingress-addon-legacy-435457" exists ...
	I1212 20:10:58.458453   25149 kapi.go:59] client config for ingress-addon-legacy-435457: &rest.Config{Host:"https://192.168.39.34:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:10:58.458544   25149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:10:58.458560   25149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:10:58.458578   25149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:10:58.458584   25149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:10:58.459198   25149 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 20:10:58.474191   25149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
	I1212 20:10:58.474525   25149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I1212 20:10:58.474666   25149 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:10:58.474917   25149 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:10:58.475155   25149 main.go:141] libmachine: Using API Version  1
	I1212 20:10:58.475177   25149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:10:58.475390   25149 main.go:141] libmachine: Using API Version  1
	I1212 20:10:58.475409   25149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:10:58.475499   25149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:10:58.475651   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetState
	I1212 20:10:58.475716   25149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:10:58.476281   25149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:10:58.476329   25149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:10:58.478008   25149 kapi.go:59] client config for ingress-addon-legacy-435457: &rest.Config{Host:"https://192.168.39.34:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:10:58.478284   25149 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-435457"
	I1212 20:10:58.478316   25149 host.go:66] Checking if "ingress-addon-legacy-435457" exists ...
	I1212 20:10:58.478665   25149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:10:58.478717   25149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:10:58.491557   25149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1212 20:10:58.492010   25149 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:10:58.492641   25149 main.go:141] libmachine: Using API Version  1
	I1212 20:10:58.492669   25149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:10:58.493065   25149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:10:58.493253   25149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I1212 20:10:58.493284   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetState
	I1212 20:10:58.493692   25149 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:10:58.494288   25149 main.go:141] libmachine: Using API Version  1
	I1212 20:10:58.494315   25149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:10:58.494708   25149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:10:58.495054   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:58.495271   25149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:10:58.495305   25149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:10:58.497156   25149 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:58.498779   25149 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:58.498802   25149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:10:58.498825   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:58.502192   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:58.502615   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:58.502642   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:58.502849   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:58.503056   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:58.503197   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:58.503388   25149 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa Username:docker}
	I1212 20:10:58.512053   25149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I1212 20:10:58.512453   25149 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:10:58.512938   25149 main.go:141] libmachine: Using API Version  1
	I1212 20:10:58.512965   25149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:10:58.513235   25149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:10:58.513434   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetState
	I1212 20:10:58.514937   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .DriverName
	I1212 20:10:58.515220   25149 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:58.515246   25149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:10:58.515271   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHHostname
	I1212 20:10:58.518314   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:58.518818   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:49:2e", ip: ""} in network mk-ingress-addon-legacy-435457: {Iface:virbr1 ExpiryTime:2023-12-12 21:10:12 +0000 UTC Type:0 Mac:52:54:00:7c:49:2e Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ingress-addon-legacy-435457 Clientid:01:52:54:00:7c:49:2e}
	I1212 20:10:58.518849   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | domain ingress-addon-legacy-435457 has defined IP address 192.168.39.34 and MAC address 52:54:00:7c:49:2e in network mk-ingress-addon-legacy-435457
	I1212 20:10:58.519003   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHPort
	I1212 20:10:58.519195   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHKeyPath
	I1212 20:10:58.519352   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .GetSSHUsername
	I1212 20:10:58.519497   25149 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/ingress-addon-legacy-435457/id_rsa Username:docker}
	I1212 20:10:58.639847   25149 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-435457" context rescaled to 1 replicas
	I1212 20:10:58.639894   25149 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:58.641603   25149 out.go:177] * Verifying Kubernetes components...
	I1212 20:10:58.643142   25149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:58.686371   25149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:58.699066   25149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:10:58.701021   25149 kapi.go:59] client config for ingress-addon-legacy-435457: &rest.Config{Host:"https://192.168.39.34:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:10:58.701249   25149 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-435457" to be "Ready" ...
	I1212 20:10:58.719329   25149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:58.745196   25149 node_ready.go:49] node "ingress-addon-legacy-435457" has status "Ready":"True"
	I1212 20:10:58.745222   25149 node_ready.go:38] duration metric: took 43.950874ms waiting for node "ingress-addon-legacy-435457" to be "Ready" ...
	I1212 20:10:58.745234   25149 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:10:58.784321   25149 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-n5s5z" in "kube-system" namespace to be "Ready" ...
	I1212 20:10:59.348321   25149 main.go:141] libmachine: Making call to close driver server
	I1212 20:10:59.348351   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .Close
	I1212 20:10:59.348321   25149 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 20:10:59.348659   25149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:10:59.348681   25149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:10:59.348692   25149 main.go:141] libmachine: Making call to close driver server
	I1212 20:10:59.348689   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Closing plugin on server side
	I1212 20:10:59.348702   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .Close
	I1212 20:10:59.348988   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Closing plugin on server side
	I1212 20:10:59.349027   25149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:10:59.349047   25149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:10:59.356814   25149 main.go:141] libmachine: Making call to close driver server
	I1212 20:10:59.356841   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .Close
	I1212 20:10:59.357092   25149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:10:59.357115   25149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:10:59.377061   25149 main.go:141] libmachine: Making call to close driver server
	I1212 20:10:59.377090   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .Close
	I1212 20:10:59.377372   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) DBG | Closing plugin on server side
	I1212 20:10:59.377415   25149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:10:59.377425   25149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:10:59.377439   25149 main.go:141] libmachine: Making call to close driver server
	I1212 20:10:59.377452   25149 main.go:141] libmachine: (ingress-addon-legacy-435457) Calling .Close
	I1212 20:10:59.377633   25149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:10:59.377649   25149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:10:59.379399   25149 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 20:10:59.380632   25149 addons.go:502] enable addons completed in 922.700332ms: enabled=[default-storageclass storage-provisioner]
	I1212 20:11:00.884419   25149 pod_ready.go:102] pod "coredns-66bff467f8-n5s5z" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:03.358924   25149 pod_ready.go:102] pod "coredns-66bff467f8-n5s5z" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:03.855725   25149 pod_ready.go:97] error getting pod "coredns-66bff467f8-n5s5z" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-n5s5z" not found
	I1212 20:11:03.855752   25149 pod_ready.go:81] duration metric: took 5.071402269s waiting for pod "coredns-66bff467f8-n5s5z" in "kube-system" namespace to be "Ready" ...
	E1212 20:11:03.855761   25149 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-n5s5z" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-n5s5z" not found
	I1212 20:11:03.855767   25149 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-nl957" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:05.875908   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:08.374911   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:10.873497   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:13.374003   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:15.875941   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:18.374055   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:20.374237   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:22.874268   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:25.374244   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:27.873467   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:29.874020   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:31.874309   25149 pod_ready.go:102] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"False"
	I1212 20:11:33.373809   25149 pod_ready.go:92] pod "coredns-66bff467f8-nl957" in "kube-system" namespace has status "Ready":"True"
	I1212 20:11:33.373836   25149 pod_ready.go:81] duration metric: took 29.518061762s waiting for pod "coredns-66bff467f8-nl957" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.373850   25149 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.378798   25149 pod_ready.go:92] pod "etcd-ingress-addon-legacy-435457" in "kube-system" namespace has status "Ready":"True"
	I1212 20:11:33.378831   25149 pod_ready.go:81] duration metric: took 4.967116ms waiting for pod "etcd-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.378844   25149 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.384154   25149 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-435457" in "kube-system" namespace has status "Ready":"True"
	I1212 20:11:33.384172   25149 pod_ready.go:81] duration metric: took 5.320435ms waiting for pod "kube-apiserver-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.384181   25149 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.388979   25149 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-435457" in "kube-system" namespace has status "Ready":"True"
	I1212 20:11:33.388999   25149 pod_ready.go:81] duration metric: took 4.811664ms waiting for pod "kube-controller-manager-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.389011   25149 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2hvcc" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.393816   25149 pod_ready.go:92] pod "kube-proxy-2hvcc" in "kube-system" namespace has status "Ready":"True"
	I1212 20:11:33.393835   25149 pod_ready.go:81] duration metric: took 4.815334ms waiting for pod "kube-proxy-2hvcc" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.393846   25149 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.567161   25149 request.go:629] Waited for 173.249349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-435457
	I1212 20:11:33.767612   25149 request.go:629] Waited for 197.341561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.34:8443/api/v1/nodes/ingress-addon-legacy-435457
	I1212 20:11:33.771720   25149 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-435457" in "kube-system" namespace has status "Ready":"True"
	I1212 20:11:33.771743   25149 pod_ready.go:81] duration metric: took 377.889811ms waiting for pod "kube-scheduler-ingress-addon-legacy-435457" in "kube-system" namespace to be "Ready" ...
	I1212 20:11:33.771754   25149 pod_ready.go:38] duration metric: took 35.026507138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:11:33.771789   25149 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:33.771854   25149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:33.785674   25149 api_server.go:72] duration metric: took 35.145745017s to wait for apiserver process to appear ...
	I1212 20:11:33.785705   25149 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:33.785721   25149 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I1212 20:11:33.791508   25149 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I1212 20:11:33.792735   25149 api_server.go:141] control plane version: v1.18.20
	I1212 20:11:33.792775   25149 api_server.go:131] duration metric: took 7.061826ms to wait for apiserver health ...
	I1212 20:11:33.792785   25149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:33.968201   25149 request.go:629] Waited for 175.357784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.34:8443/api/v1/namespaces/kube-system/pods
	I1212 20:11:33.974660   25149 system_pods.go:59] 7 kube-system pods found
	I1212 20:11:33.974696   25149 system_pods.go:61] "coredns-66bff467f8-nl957" [e1e91fc9-074a-4e05-abed-bd2754e7eb60] Running
	I1212 20:11:33.974704   25149 system_pods.go:61] "etcd-ingress-addon-legacy-435457" [59821c13-732f-4644-81d0-eb26b6797fbf] Running
	I1212 20:11:33.974711   25149 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-435457" [1db44314-e271-4700-9a75-6da4f4056bc1] Running
	I1212 20:11:33.974718   25149 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-435457" [47f97b2d-db56-4632-98f1-acc3ec8eb523] Running
	I1212 20:11:33.974728   25149 system_pods.go:61] "kube-proxy-2hvcc" [d490df4f-815c-477e-a01d-dd2efce51bf3] Running
	I1212 20:11:33.974734   25149 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-435457" [d978ef54-aa9f-4239-a209-911ea32d3fd2] Running
	I1212 20:11:33.974744   25149 system_pods.go:61] "storage-provisioner" [440edbd5-4307-47b9-a5ef-29534be17485] Running
	I1212 20:11:33.974759   25149 system_pods.go:74] duration metric: took 181.966993ms to wait for pod list to return data ...
	I1212 20:11:33.974773   25149 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:34.167151   25149 request.go:629] Waited for 192.2983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.34:8443/api/v1/namespaces/default/serviceaccounts
	I1212 20:11:34.170351   25149 default_sa.go:45] found service account: "default"
	I1212 20:11:34.170379   25149 default_sa.go:55] duration metric: took 195.595861ms for default service account to be created ...
	I1212 20:11:34.170390   25149 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:11:34.367654   25149 request.go:629] Waited for 197.196873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.34:8443/api/v1/namespaces/kube-system/pods
	I1212 20:11:34.374799   25149 system_pods.go:86] 7 kube-system pods found
	I1212 20:11:34.374828   25149 system_pods.go:89] "coredns-66bff467f8-nl957" [e1e91fc9-074a-4e05-abed-bd2754e7eb60] Running
	I1212 20:11:34.374836   25149 system_pods.go:89] "etcd-ingress-addon-legacy-435457" [59821c13-732f-4644-81d0-eb26b6797fbf] Running
	I1212 20:11:34.374842   25149 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-435457" [1db44314-e271-4700-9a75-6da4f4056bc1] Running
	I1212 20:11:34.374847   25149 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-435457" [47f97b2d-db56-4632-98f1-acc3ec8eb523] Running
	I1212 20:11:34.374853   25149 system_pods.go:89] "kube-proxy-2hvcc" [d490df4f-815c-477e-a01d-dd2efce51bf3] Running
	I1212 20:11:34.374859   25149 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-435457" [d978ef54-aa9f-4239-a209-911ea32d3fd2] Running
	I1212 20:11:34.374865   25149 system_pods.go:89] "storage-provisioner" [440edbd5-4307-47b9-a5ef-29534be17485] Running
	I1212 20:11:34.374874   25149 system_pods.go:126] duration metric: took 204.47696ms to wait for k8s-apps to be running ...
	I1212 20:11:34.374887   25149 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:11:34.374938   25149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:34.389258   25149 system_svc.go:56] duration metric: took 14.361564ms WaitForService to wait for kubelet.
	I1212 20:11:34.389284   25149 kubeadm.go:581] duration metric: took 35.749361438s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:11:34.389307   25149 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:34.567766   25149 request.go:629] Waited for 178.379753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.34:8443/api/v1/nodes
	I1212 20:11:34.572216   25149 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:11:34.572252   25149 node_conditions.go:123] node cpu capacity is 2
	I1212 20:11:34.572267   25149 node_conditions.go:105] duration metric: took 182.954036ms to run NodePressure ...
	I1212 20:11:34.572286   25149 start.go:228] waiting for startup goroutines ...
	I1212 20:11:34.572299   25149 start.go:233] waiting for cluster config update ...
	I1212 20:11:34.572312   25149 start.go:242] writing updated cluster config ...
	I1212 20:11:34.572628   25149 ssh_runner.go:195] Run: rm -f paused
	I1212 20:11:34.618709   25149 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 20:11:34.620918   25149 out.go:177] 
	W1212 20:11:34.622807   25149 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 20:11:34.625788   25149 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 20:11:34.627283   25149 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-435457" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 20:10:09 UTC, ends at Tue 2023-12-12 20:14:42 UTC. --
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.897456734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6c349185-4c5b-4bc9-a970-be8a88ddb74d name=/runtime.v1.RuntimeService/Version
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.898798596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=792bfee5-b1b4-43dd-a4ad-80e23e108163 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.899333587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412081899318131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=792bfee5-b1b4-43dd-a4ad-80e23e108163 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.900052247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=98deac0a-4512-4d64-bff0-3320bd6f524c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.900104837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=98deac0a-4512-4d64-bff0-3320bd6f524c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.901000447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f82dda0453d0c1c0efbedb81a78d5e78fbb8f6edf237bb54dc5bf025d7045125,PodSandboxId:e993d904fee9953a4455f47944da74a90118095ee2ee27744cd73eea16689317,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702412062916693397,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-4vgvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 976716c6-975b-488b-9939-a06dd4ef3f02,},Annotations:map[string]string{io.kubernetes.container.hash: f63c1106,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c679723542d3244bae216be0112f1c5852b1a0a404757e2581187f8b9396c8,PodSandboxId:2eab127ad6524ad8b535ccd5213c03ebcb9874505bf8e47f8a8758f96aed5555,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411924732812848,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6fa21082,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a470deaa87f231193d0089980fc4c4fc12c21fee638cc1e8dd5a8dad2c0adb98,PodSandboxId:d326ef94a54fb892f486107d85f59536203c905c41e9914a36be21297ba53e2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702411907903670062,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-8jcs8,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1caf3a5e-75f8-4d50-921e-e633246f1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 735e5090,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8bf099f94241be1241fc293aef408541917b52a7dbf372f97ff226d43389cb6,PodSandboxId:9410ed403bd04987b25e1c7ed5e62852d48040f32b8081824723aa998c91b27c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899877458298,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-flgb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 90be23ec-efcd-4df8-ba5e-a349efd8ad98,},Annotations:map[string]string{io.kubernetes.container.hash: 66017557,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99325b9f21513041aebd0fabeaf8d18a27e2f9c2b1c54b98477798ba08a0f269,PodSandboxId:75ea41d697db4e8b01812728205599dd101d0667b9f8d51e79847364c0e2a35a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899028423954,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6xv59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ca60156-df42-4943-a4e9-a768f3baf945,},Annotations:map[string]string{io.kubernetes.container.hash: 913b0fb7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5271cd7c58810c2fcadd8c90b508aaf74ead9b2c7322ebfc2902c6d39042e,PodSandboxId:4c05518114044eee9a0082475df1a085fa6166ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411890722847227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16ba4bbc1204f012c078d9c5d54af185daff7b63acaa1b5e84b18d1e30c6aaf,PodSandboxId:8192e803c353b9e789c4dd879238665b5a21887ef1f8c3d4e56dac4953351d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702411860982071617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hvcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d490df4f-815c-477e-a01d-dd2efce51bf3,},Annotations:map[string]string{io.kubernetes.container.hash: 1e792ab7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e52c6c217862f40e766c774889430b47ce17160fbc16a22d26736ab35f97fde,PodSandboxId:6828a8dac0dae038d16707b598a78267a9db80f2c060ed9f5c3e2aa73a750810,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702411860690290433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nl957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e91fc9-074a-4e05-abed-bd2754e7eb60,},Annotations:map[string]string{io.kubernetes.container.hash: cf92fc9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dc1982385247f60558bf7cd65af97e19f6e3868f1786d97610ad8146c6c33dd,PodSandboxId:4c05518114044eee9a0082475df1a085fa61
66ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702411860060298682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b248c6e6b647fce2e90a193ae2353cfa559afc4e0f9d363175b64472b0720d5d,PodSandboxId:033ff87ea4b0b199ca111bf247a0660a9ba024
4a08993c430c2027000025711c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702411835603281251,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c52ba371dbf381ce5afc5b9659dc9f72,},Annotations:map[string]string{io.kubernetes.container.hash: 30706dad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc50581f6c5409b19eb58f6c0d73cb6fb105345d63debb26747273a0ae80e91,PodSandboxId:dddfa74c102c91b6a448c24df0736752a426e2c6b7899ed2e80ad76e4aad46a7,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702411834826628525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c9628830e64f42f96ef9f2f89e484a4546c29689a5a94ee3e9566252c108c4,PodSandboxId:63c8d0733d1fa3b4006c9d0df71aa85e92682431140690c2c9f1ec84810cf151,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702411834572297912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb6e58ec6d31719782cf36b778fe348,},Annotations:map[string]string{io.kubernetes.container.hash: 52671b67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7049d321b526be4d7ebfa4a46b029ef1e7e53287266046fcff8685ec535287,PodSandboxId:6b30d5ae2e3f5f75723bba173f3b7b174e1fbf65c43eecf84d4bf0f03dc4da66,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702411834396135357,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=98deac0a-4512-4d64-bff0-3320bd6f524c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.938156599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f2b8cbb-061b-4c87-a48a-53dff47b545f name=/runtime.v1.RuntimeService/Version
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.938245371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f2b8cbb-061b-4c87-a48a-53dff47b545f name=/runtime.v1.RuntimeService/Version
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.940068512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4786fc27-3d58-4fed-81ee-5948bfff5148 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.940644499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412081940628465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=4786fc27-3d58-4fed-81ee-5948bfff5148 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.942291619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b04d11cd-956c-4f35-9385-cd758dcfb4a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.942364498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b04d11cd-956c-4f35-9385-cd758dcfb4a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.942712908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f82dda0453d0c1c0efbedb81a78d5e78fbb8f6edf237bb54dc5bf025d7045125,PodSandboxId:e993d904fee9953a4455f47944da74a90118095ee2ee27744cd73eea16689317,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702412062916693397,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-4vgvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 976716c6-975b-488b-9939-a06dd4ef3f02,},Annotations:map[string]string{io.kubernetes.container.hash: f63c1106,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c679723542d3244bae216be0112f1c5852b1a0a404757e2581187f8b9396c8,PodSandboxId:2eab127ad6524ad8b535ccd5213c03ebcb9874505bf8e47f8a8758f96aed5555,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411924732812848,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6fa21082,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a470deaa87f231193d0089980fc4c4fc12c21fee638cc1e8dd5a8dad2c0adb98,PodSandboxId:d326ef94a54fb892f486107d85f59536203c905c41e9914a36be21297ba53e2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702411907903670062,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-8jcs8,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1caf3a5e-75f8-4d50-921e-e633246f1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 735e5090,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8bf099f94241be1241fc293aef408541917b52a7dbf372f97ff226d43389cb6,PodSandboxId:9410ed403bd04987b25e1c7ed5e62852d48040f32b8081824723aa998c91b27c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899877458298,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-flgb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 90be23ec-efcd-4df8-ba5e-a349efd8ad98,},Annotations:map[string]string{io.kubernetes.container.hash: 66017557,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99325b9f21513041aebd0fabeaf8d18a27e2f9c2b1c54b98477798ba08a0f269,PodSandboxId:75ea41d697db4e8b01812728205599dd101d0667b9f8d51e79847364c0e2a35a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899028423954,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6xv59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ca60156-df42-4943-a4e9-a768f3baf945,},Annotations:map[string]string{io.kubernetes.container.hash: 913b0fb7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5271cd7c58810c2fcadd8c90b508aaf74ead9b2c7322ebfc2902c6d39042e,PodSandboxId:4c05518114044eee9a0082475df1a085fa6166ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411890722847227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16ba4bbc1204f012c078d9c5d54af185daff7b63acaa1b5e84b18d1e30c6aaf,PodSandboxId:8192e803c353b9e789c4dd879238665b5a21887ef1f8c3d4e56dac4953351d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702411860982071617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hvcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d490df4f-815c-477e-a01d-dd2efce51bf3,},Annotations:map[string]string{io.kubernetes.container.hash: 1e792ab7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e52c6c217862f40e766c774889430b47ce17160fbc16a22d26736ab35f97fde,PodSandboxId:6828a8dac0dae038d16707b598a78267a9db80f2c060ed9f5c3e2aa73a750810,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702411860690290433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nl957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e91fc9-074a-4e05-abed-bd2754e7eb60,},Annotations:map[string]string{io.kubernetes.container.hash: cf92fc9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dc1982385247f60558bf7cd65af97e19f6e3868f1786d97610ad8146c6c33dd,PodSandboxId:4c05518114044eee9a0082475df1a085fa61
66ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702411860060298682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b248c6e6b647fce2e90a193ae2353cfa559afc4e0f9d363175b64472b0720d5d,PodSandboxId:033ff87ea4b0b199ca111bf247a0660a9ba024
4a08993c430c2027000025711c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702411835603281251,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c52ba371dbf381ce5afc5b9659dc9f72,},Annotations:map[string]string{io.kubernetes.container.hash: 30706dad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc50581f6c5409b19eb58f6c0d73cb6fb105345d63debb26747273a0ae80e91,PodSandboxId:dddfa74c102c91b6a448c24df0736752a426e2c6b7899ed2e80ad76e4aad46a7,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702411834826628525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c9628830e64f42f96ef9f2f89e484a4546c29689a5a94ee3e9566252c108c4,PodSandboxId:63c8d0733d1fa3b4006c9d0df71aa85e92682431140690c2c9f1ec84810cf151,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702411834572297912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb6e58ec6d31719782cf36b778fe348,},Annotations:map[string]string{io.kubernetes.container.hash: 52671b67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7049d321b526be4d7ebfa4a46b029ef1e7e53287266046fcff8685ec535287,PodSandboxId:6b30d5ae2e3f5f75723bba173f3b7b174e1fbf65c43eecf84d4bf0f03dc4da66,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702411834396135357,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b04d11cd-956c-4f35-9385-cd758dcfb4a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.978281480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9afe1ea7-707f-4b2e-befc-d02555a7cbb6 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.978341845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9afe1ea7-707f-4b2e-befc-d02555a7cbb6 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.980013023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=994e7303-0560-4c16-babc-d4087ee7247c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.980476648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412081980442042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=994e7303-0560-4c16-babc-d4087ee7247c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.981164982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=af485349-d0bb-4101-a08c-68e205edd3c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.981209150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=af485349-d0bb-4101-a08c-68e205edd3c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.981461829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f82dda0453d0c1c0efbedb81a78d5e78fbb8f6edf237bb54dc5bf025d7045125,PodSandboxId:e993d904fee9953a4455f47944da74a90118095ee2ee27744cd73eea16689317,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702412062916693397,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-4vgvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 976716c6-975b-488b-9939-a06dd4ef3f02,},Annotations:map[string]string{io.kubernetes.container.hash: f63c1106,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c679723542d3244bae216be0112f1c5852b1a0a404757e2581187f8b9396c8,PodSandboxId:2eab127ad6524ad8b535ccd5213c03ebcb9874505bf8e47f8a8758f96aed5555,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411924732812848,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6fa21082,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a470deaa87f231193d0089980fc4c4fc12c21fee638cc1e8dd5a8dad2c0adb98,PodSandboxId:d326ef94a54fb892f486107d85f59536203c905c41e9914a36be21297ba53e2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702411907903670062,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-8jcs8,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1caf3a5e-75f8-4d50-921e-e633246f1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 735e5090,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8bf099f94241be1241fc293aef408541917b52a7dbf372f97ff226d43389cb6,PodSandboxId:9410ed403bd04987b25e1c7ed5e62852d48040f32b8081824723aa998c91b27c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899877458298,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-flgb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 90be23ec-efcd-4df8-ba5e-a349efd8ad98,},Annotations:map[string]string{io.kubernetes.container.hash: 66017557,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99325b9f21513041aebd0fabeaf8d18a27e2f9c2b1c54b98477798ba08a0f269,PodSandboxId:75ea41d697db4e8b01812728205599dd101d0667b9f8d51e79847364c0e2a35a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899028423954,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6xv59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ca60156-df42-4943-a4e9-a768f3baf945,},Annotations:map[string]string{io.kubernetes.container.hash: 913b0fb7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5271cd7c58810c2fcadd8c90b508aaf74ead9b2c7322ebfc2902c6d39042e,PodSandboxId:4c05518114044eee9a0082475df1a085fa6166ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411890722847227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16ba4bbc1204f012c078d9c5d54af185daff7b63acaa1b5e84b18d1e30c6aaf,PodSandboxId:8192e803c353b9e789c4dd879238665b5a21887ef1f8c3d4e56dac4953351d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702411860982071617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hvcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d490df4f-815c-477e-a01d-dd2efce51bf3,},Annotations:map[string]string{io.kubernetes.container.hash: 1e792ab7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e52c6c217862f40e766c774889430b47ce17160fbc16a22d26736ab35f97fde,PodSandboxId:6828a8dac0dae038d16707b598a78267a9db80f2c060ed9f5c3e2aa73a750810,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702411860690290433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nl957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e91fc9-074a-4e05-abed-bd2754e7eb60,},Annotations:map[string]string{io.kubernetes.container.hash: cf92fc9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dc1982385247f60558bf7cd65af97e19f6e3868f1786d97610ad8146c6c33dd,PodSandboxId:4c05518114044eee9a0082475df1a085fa61
66ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702411860060298682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b248c6e6b647fce2e90a193ae2353cfa559afc4e0f9d363175b64472b0720d5d,PodSandboxId:033ff87ea4b0b199ca111bf247a0660a9ba024
4a08993c430c2027000025711c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702411835603281251,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c52ba371dbf381ce5afc5b9659dc9f72,},Annotations:map[string]string{io.kubernetes.container.hash: 30706dad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc50581f6c5409b19eb58f6c0d73cb6fb105345d63debb26747273a0ae80e91,PodSandboxId:dddfa74c102c91b6a448c24df0736752a426e2c6b7899ed2e80ad76e4aad46a7,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702411834826628525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c9628830e64f42f96ef9f2f89e484a4546c29689a5a94ee3e9566252c108c4,PodSandboxId:63c8d0733d1fa3b4006c9d0df71aa85e92682431140690c2c9f1ec84810cf151,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702411834572297912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb6e58ec6d31719782cf36b778fe348,},Annotations:map[string]string{io.kubernetes.container.hash: 52671b67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7049d321b526be4d7ebfa4a46b029ef1e7e53287266046fcff8685ec535287,PodSandboxId:6b30d5ae2e3f5f75723bba173f3b7b174e1fbf65c43eecf84d4bf0f03dc4da66,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702411834396135357,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=af485349-d0bb-4101-a08c-68e205edd3c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.997309240Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=e217f78c-66e4-46d0-a95b-a4ffbdaeea8a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.997775512Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e993d904fee9953a4455f47944da74a90118095ee2ee27744cd73eea16689317,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-4vgvr,Uid:976716c6-975b-488b-9939-a06dd4ef3f02,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702412060224475438,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-4vgvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 976716c6-975b-488b-9939-a06dd4ef3f02,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:14:19.877642665Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2eab127ad6524ad8b535ccd5213c03ebcb9874505bf8e47f8a8758f96aed5555,Metadata:&PodSandboxMetadata{Name:nginx,Uid:fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411922010839061,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:12:01.668098262Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1cc58262911980e2fddc60b73dcfc4afa8c25b650a86836dec21b3d42dd49e2,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:13d32b74-827d-43b9-92d3-9ce86a78e7c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702411910717058472,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d32b74-827d-43b9-92d3-9ce86a78e7c3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-12-12T20:11:48.866819188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d326ef94a54fb892f486107d85f59536203c905c41e9914a36be21297ba53e2f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-8jcs8,Uid:1caf3a5e-75f8-4d50-921e
-e633246f1d9b,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702411900450944384,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-8jcs8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1caf3a5e-75f8-4d50-921e-e633246f1d9b,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:11:35.580478825Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:75ea41d697db4e8b01812728205599dd101d0667b9f8d51e79847364c0e2a35a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-6xv59,Uid:4ca60156-df42-4943-a4e9-a768f3baf945,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702411897445508669,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/in
stance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 93e4c905-6e50-4fc0-9930-83d587156b88,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-6xv59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ca60156-df42-4943-a4e9-a768f3baf945,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:11:35.581506702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9410ed403bd04987b25e1c7ed5e62852d48040f32b8081824723aa998c91b27c,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-flgb6,Uid:90be23ec-efcd-4df8-ba5e-a349efd8ad98,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702411896983737299,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 16803d35-3f10-47bd-8682-63500552c7ad,io.kubernetes.container.name: POD,io.kuberne
tes.pod.name: ingress-nginx-admission-patch-flgb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 90be23ec-efcd-4df8-ba5e-a349efd8ad98,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:11:35.743738768Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6828a8dac0dae038d16707b598a78267a9db80f2c060ed9f5c3e2aa73a750810,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-nl957,Uid:e1e91fc9-074a-4e05-abed-bd2754e7eb60,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411860362574886,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bff467f8-nl957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e91fc9-074a-4e05-abed-bd2754e7eb60,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:10:58.498311355Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8192e803c
353b9e789c4dd879238665b5a21887ef1f8c3d4e56dac4953351d25,Metadata:&PodSandboxMetadata{Name:kube-proxy-2hvcc,Uid:d490df4f-815c-477e-a01d-dd2efce51bf3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411860173007710,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2hvcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d490df4f-815c-477e-a01d-dd2efce51bf3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T20:10:58.330684588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c05518114044eee9a0082475df1a085fa6166ffd8c2554acc287d40cef22f7b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:440edbd5-4307-47b9-a5ef-29534be17485,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411859727036741,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner
,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-12T20:10:59.369350642Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:dddfa74c102c91b6a448c24df0736752a426e2c6b7899ed2e80ad76e4aad46a7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-435457,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411834018423339,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2023-12-12T20:10:32.691723987Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b30d5ae2e3f5f75723bba173f3b7b174e1fbf65c43eecf84d4bf0f03dc4da66,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-435457,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1702411834008359172,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-12-12T20:10:32.691722478Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:033ff87ea4b0b199ca111bf247a0660a9ba0244a08993c430c2027000025711c,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-435457,Uid:c52ba371dbf381ce5afc5b9659dc9f72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411833969400301,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c52ba371dbf381ce5afc5b9659dc9f72,tier: contr
ol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.34:2379,kubernetes.io/config.hash: c52ba371dbf381ce5afc5b9659dc9f72,kubernetes.io/config.seen: 2023-12-12T20:10:32.691713920Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:63c8d0733d1fa3b4006c9d0df71aa85e92682431140690c2c9f1ec84810cf151,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-435457,Uid:3eb6e58ec6d31719782cf36b778fe348,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702411833934441942,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb6e58ec6d31719782cf36b778fe348,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.34:8443,kubernetes.io/config.hash: 3eb6e58ec6d31719782cf36b778fe348,kubernetes.
io/config.seen: 2023-12-12T20:10:32.691720555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e217f78c-66e4-46d0-a95b-a4ffbdaeea8a name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.998672705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cdc948b9-7d92-4718-a8b6-a8e09059dc20 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.998752565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cdc948b9-7d92-4718-a8b6-a8e09059dc20 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 12 20:14:41 ingress-addon-legacy-435457 crio[721]: time="2023-12-12 20:14:41.999037829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f82dda0453d0c1c0efbedb81a78d5e78fbb8f6edf237bb54dc5bf025d7045125,PodSandboxId:e993d904fee9953a4455f47944da74a90118095ee2ee27744cd73eea16689317,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702412062916693397,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-4vgvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 976716c6-975b-488b-9939-a06dd4ef3f02,},Annotations:map[string]string{io.kubernetes.container.hash: f63c1106,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c679723542d3244bae216be0112f1c5852b1a0a404757e2581187f8b9396c8,PodSandboxId:2eab127ad6524ad8b535ccd5213c03ebcb9874505bf8e47f8a8758f96aed5555,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702411924732812848,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6fa21082,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a470deaa87f231193d0089980fc4c4fc12c21fee638cc1e8dd5a8dad2c0adb98,PodSandboxId:d326ef94a54fb892f486107d85f59536203c905c41e9914a36be21297ba53e2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702411907903670062,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-8jcs8,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1caf3a5e-75f8-4d50-921e-e633246f1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 735e5090,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8bf099f94241be1241fc293aef408541917b52a7dbf372f97ff226d43389cb6,PodSandboxId:9410ed403bd04987b25e1c7ed5e62852d48040f32b8081824723aa998c91b27c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899877458298,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-flgb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 90be23ec-efcd-4df8-ba5e-a349efd8ad98,},Annotations:map[string]string{io.kubernetes.container.hash: 66017557,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99325b9f21513041aebd0fabeaf8d18a27e2f9c2b1c54b98477798ba08a0f269,PodSandboxId:75ea41d697db4e8b01812728205599dd101d0667b9f8d51e79847364c0e2a35a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702411899028423954,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6xv59,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ca60156-df42-4943-a4e9-a768f3baf945,},Annotations:map[string]string{io.kubernetes.container.hash: 913b0fb7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5271cd7c58810c2fcadd8c90b508aaf74ead9b2c7322ebfc2902c6d39042e,PodSandboxId:4c05518114044eee9a0082475df1a085fa6166ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702411890722847227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16ba4bbc1204f012c078d9c5d54af185daff7b63acaa1b5e84b18d1e30c6aaf,PodSandboxId:8192e803c353b9e789c4dd879238665b5a21887ef1f8c3d4e56dac4953351d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702411860982071617,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2hvcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d490df4f-815c-477e-a01d-dd2efce51bf3,},Annotations:map[string]string{io.kubernetes.container.hash: 1e792ab7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e52c6c217862f40e766c774889430b47ce17160fbc16a22d26736ab35f97fde,PodSandboxId:6828a8dac0dae038d16707b598a78267a9db80f2c060ed9f5c3e2aa73a750810,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702411860690290433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nl957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e91fc9-074a-4e05-abed-bd2754e7eb60,},Annotations:map[string]string{io.kubernetes.container.hash: cf92fc9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dc1982385247f60558bf7cd65af97e19f6e3868f1786d97610ad8146c6c33dd,PodSandboxId:4c05518114044eee9a0082475df1a085fa61
66ffd8c2554acc287d40cef22f7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702411860060298682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440edbd5-4307-47b9-a5ef-29534be17485,},Annotations:map[string]string{io.kubernetes.container.hash: fd18a1a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b248c6e6b647fce2e90a193ae2353cfa559afc4e0f9d363175b64472b0720d5d,PodSandboxId:033ff87ea4b0b199ca111bf247a0660a9ba024
4a08993c430c2027000025711c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702411835603281251,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c52ba371dbf381ce5afc5b9659dc9f72,},Annotations:map[string]string{io.kubernetes.container.hash: 30706dad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc50581f6c5409b19eb58f6c0d73cb6fb105345d63debb26747273a0ae80e91,PodSandboxId:dddfa74c102c91b6a448c24df0736752a426e2c6b7899ed2e80ad76e4aad46a7,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702411834826628525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c9628830e64f42f96ef9f2f89e484a4546c29689a5a94ee3e9566252c108c4,PodSandboxId:63c8d0733d1fa3b4006c9d0df71aa85e92682431140690c2c9f1ec84810cf151,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702411834572297912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb6e58ec6d31719782cf36b778fe348,},Annotations:map[string]string{io.kubernetes.container.hash: 52671b67,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7049d321b526be4d7ebfa4a46b029ef1e7e53287266046fcff8685ec535287,PodSandboxId:6b30d5ae2e3f5f75723bba173f3b7b174e1fbf65c43eecf84d4bf0f03dc4da66,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702411834396135357,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-435457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cdc948b9-7d92-4718-a8b6-a8e09059dc20 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f82dda0453d0c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            19 seconds ago      Running             hello-world-app           0                   e993d904fee99       hello-world-app-5f5d8b66bb-4vgvr
	99c679723542d       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   2eab127ad6524       nginx
	a470deaa87f23       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   d326ef94a54fb       ingress-nginx-controller-7fcf777cb7-8jcs8
	d8bf099f94241       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                   3 minutes ago       Exited              patch                     1                   9410ed403bd04       ingress-nginx-admission-patch-flgb6
	99325b9f21513       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   75ea41d697db4       ingress-nginx-admission-create-6xv59
	b9c5271cd7c58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   4c05518114044       storage-provisioner
	e16ba4bbc1204       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   8192e803c353b       kube-proxy-2hvcc
	4e52c6c217862       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   6828a8dac0dae       coredns-66bff467f8-nl957
	3dc1982385247       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   4c05518114044       storage-provisioner
	b248c6e6b647f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   033ff87ea4b0b       etcd-ingress-addon-legacy-435457
	0bc50581f6c54       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   dddfa74c102c9       kube-scheduler-ingress-addon-legacy-435457
	40c9628830e64       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   63c8d0733d1fa       kube-apiserver-ingress-addon-legacy-435457
	8b7049d321b52       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   6b30d5ae2e3f5       kube-controller-manager-ingress-addon-legacy-435457
	
	
	==> coredns [4e52c6c217862f40e766c774889430b47ce17160fbc16a22d26736ab35f97fde] <==
	[INFO] 10.244.0.6:48396 - 59815 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000110776s
	[INFO] 10.244.0.6:48396 - 44605 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101793s
	[INFO] 10.244.0.6:48396 - 56841 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007734s
	[INFO] 10.244.0.6:48396 - 4169 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109237s
	[INFO] 10.244.0.6:37919 - 30667 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109112s
	[INFO] 10.244.0.6:37919 - 468 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000055724s
	[INFO] 10.244.0.6:37919 - 58312 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071859s
	[INFO] 10.244.0.6:37919 - 59196 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065075s
	[INFO] 10.244.0.6:37919 - 64921 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067794s
	[INFO] 10.244.0.6:37919 - 22388 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038063s
	[INFO] 10.244.0.6:37919 - 9125 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073768s
	[INFO] 10.244.0.6:43610 - 13092 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090111s
	[INFO] 10.244.0.6:58911 - 40081 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078035s
	[INFO] 10.244.0.6:43610 - 1703 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028856s
	[INFO] 10.244.0.6:43610 - 14487 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000029425s
	[INFO] 10.244.0.6:58911 - 7447 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000022122s
	[INFO] 10.244.0.6:43610 - 51174 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040092s
	[INFO] 10.244.0.6:58911 - 44338 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033972s
	[INFO] 10.244.0.6:43610 - 11369 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000145532s
	[INFO] 10.244.0.6:58911 - 46808 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023173s
	[INFO] 10.244.0.6:43610 - 21678 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024446s
	[INFO] 10.244.0.6:58911 - 37356 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027972s
	[INFO] 10.244.0.6:43610 - 7185 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036626s
	[INFO] 10.244.0.6:58911 - 43214 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029941s
	[INFO] 10.244.0.6:58911 - 6253 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112362s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-435457
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-435457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=ingress-addon-legacy-435457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T20_10_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-435457
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:12:13 +0000   Tue, 12 Dec 2023 20:10:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:12:13 +0000   Tue, 12 Dec 2023 20:10:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:12:13 +0000   Tue, 12 Dec 2023 20:10:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:12:13 +0000   Tue, 12 Dec 2023 20:10:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    ingress-addon-legacy-435457
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 15a17b1800894fec9950af708eddf1fd
	  System UUID:                15a17b18-0089-4fec-9950-af708eddf1fd
	  Boot ID:                    1920ea25-d57d-4229-bce6-e0f59a961542
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-4vgvr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-66bff467f8-nl957                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-435457                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-435457             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-435457    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-2hvcc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-435457             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m59s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s  kubelet     Node ingress-addon-legacy-435457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s  kubelet     Node ingress-addon-legacy-435457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s  kubelet     Node ingress-addon-legacy-435457 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m59s  kubelet     Node ingress-addon-legacy-435457 status is now: NodeReady
	  Normal  Starting                 3m41s  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Dec12 20:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093698] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.414722] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.572935] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154503] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.050611] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.165210] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.118255] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.163433] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.114431] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.216765] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[  +7.792528] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +3.592267] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.744514] systemd-fstab-generator[1436]: Ignoring "noauto" for root device
	[ +17.605672] kauditd_printk_skb: 6 callbacks suppressed
	[Dec12 20:11] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.806214] kauditd_printk_skb: 6 callbacks suppressed
	[Dec12 20:12] kauditd_printk_skb: 7 callbacks suppressed
	[Dec12 20:14] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.128363] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [b248c6e6b647fce2e90a193ae2353cfa559afc4e0f9d363175b64472b0720d5d] <==
	raft2023/12/12 20:10:35 INFO: 6c39268f2da6496d switched to configuration voters=(7798306626156775789)
	2023-12-12 20:10:35.765601 W | auth: simple token is not cryptographically signed
	2023-12-12 20:10:35.771954 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-12 20:10:35.773956 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 20:10:35.774140 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 20:10:35.774612 I | etcdserver: 6c39268f2da6496d as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-12 20:10:35.774959 I | embed: listening for peers on 192.168.39.34:2380
	raft2023/12/12 20:10:35 INFO: 6c39268f2da6496d switched to configuration voters=(7798306626156775789)
	2023-12-12 20:10:35.775499 I | etcdserver/membership: added member 6c39268f2da6496d [https://192.168.39.34:2380] to cluster c5b11fc56322ab9a
	raft2023/12/12 20:10:36 INFO: 6c39268f2da6496d is starting a new election at term 1
	raft2023/12/12 20:10:36 INFO: 6c39268f2da6496d became candidate at term 2
	raft2023/12/12 20:10:36 INFO: 6c39268f2da6496d received MsgVoteResp from 6c39268f2da6496d at term 2
	raft2023/12/12 20:10:36 INFO: 6c39268f2da6496d became leader at term 2
	raft2023/12/12 20:10:36 INFO: raft.node: 6c39268f2da6496d elected leader 6c39268f2da6496d at term 2
	2023-12-12 20:10:36.558212 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 20:10:36.558640 I | etcdserver: published {Name:ingress-addon-legacy-435457 ClientURLs:[https://192.168.39.34:2379]} to cluster c5b11fc56322ab9a
	2023-12-12 20:10:36.558956 I | embed: ready to serve client requests
	2023-12-12 20:10:36.559169 I | embed: ready to serve client requests
	2023-12-12 20:10:36.560664 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 20:10:36.564624 I | embed: serving client requests on 192.168.39.34:2379
	2023-12-12 20:10:36.574346 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 20:10:36.574455 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 20:10:57.960181 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" " with result "range_response_count:1 size:218" took too long (487.392113ms) to execute
	2023-12-12 20:10:59.093811 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-435457\" " with result "range_response_count:1 size:6295" took too long (103.769389ms) to execute
	2023-12-12 20:12:11.343711 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (131.153658ms) to execute
	
	
	==> kernel <==
	 20:14:42 up 4 min,  0 users,  load average: 0.24, 0.43, 0.21
	Linux ingress-addon-legacy-435457 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [40c9628830e64f42f96ef9f2f89e484a4546c29689a5a94ee3e9566252c108c4] <==
	I1212 20:10:39.681751       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 20:10:39.682461       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:10:39.682576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:10:39.682592       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:10:39.712620       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 20:10:40.578432       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 20:10:40.578671       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 20:10:40.603977       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 20:10:40.610869       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:10:40.610962       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 20:10:41.112031       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:10:41.157929       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 20:10:41.234032       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.34]
	I1212 20:10:41.234950       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 20:10:41.242834       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:10:41.945988       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1212 20:10:42.945439       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 20:10:43.083148       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 20:10:43.610908       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:10:58.299133       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 20:10:58.327633       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 20:11:35.516269       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1212 20:12:01.485182       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1212 20:14:34.476380       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1212 20:14:36.123353       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [8b7049d321b526be4d7ebfa4a46b029ef1e7e53287266046fcff8685ec535287] <==
	I1212 20:10:58.364682       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ec09efc5-66ac-4061-8945-80b9e906ee2e", APIVersion:"apps/v1", ResourceVersion:"193", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1212 20:10:58.371305       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1212 20:10:58.373430       1 shared_informer.go:230] Caches are synced for PV protection 
	I1212 20:10:58.391634       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc31d398-5de1-4599-b351-0f5898bc091a", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-n5s5z
	I1212 20:10:58.402925       1 shared_informer.go:230] Caches are synced for expand 
	I1212 20:10:58.428283       1 shared_informer.go:230] Caches are synced for attach detach 
	I1212 20:10:58.454366       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc31d398-5de1-4599-b351-0f5898bc091a", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-nl957
	I1212 20:10:58.472918       1 shared_informer.go:230] Caches are synced for job 
	I1212 20:10:58.648895       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ec09efc5-66ac-4061-8945-80b9e906ee2e", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1212 20:10:58.673800       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 20:10:58.683628       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 20:10:58.683701       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 20:10:58.701844       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 20:10:58.734352       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc31d398-5de1-4599-b351-0f5898bc091a", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-n5s5z
	I1212 20:10:58.823178       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1212 20:10:58.823389       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 20:11:35.494462       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9b0390ee-be06-4e96-8836-f0c043c1e355", APIVersion:"apps/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 20:11:35.550325       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"93e4c905-6e50-4fc0-9930-83d587156b88", APIVersion:"batch/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-6xv59
	I1212 20:11:35.551329       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"81046712-2431-4b4b-9d22-70d13eef0415", APIVersion:"apps/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-8jcs8
	I1212 20:11:35.658828       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"16803d35-3f10-47bd-8682-63500552c7ad", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-flgb6
	I1212 20:11:39.883458       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"93e4c905-6e50-4fc0-9930-83d587156b88", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 20:11:40.877440       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"16803d35-3f10-47bd-8682-63500552c7ad", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 20:14:19.838300       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f6aebffe-9949-4ad6-aaec-3c921cea9d1a", APIVersion:"apps/v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1212 20:14:19.867195       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"566997cd-5156-4049-a309-b536f94627e2", APIVersion:"apps/v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-4vgvr
	E1212 20:14:39.250749       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-dd5vn" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [e16ba4bbc1204f012c078d9c5d54af185daff7b63acaa1b5e84b18d1e30c6aaf] <==
	W1212 20:11:01.162890       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 20:11:01.170997       1 node.go:136] Successfully retrieved node IP: 192.168.39.34
	I1212 20:11:01.171122       1 server_others.go:186] Using iptables Proxier.
	I1212 20:11:01.171362       1 server.go:583] Version: v1.18.20
	I1212 20:11:01.175789       1 config.go:315] Starting service config controller
	I1212 20:11:01.175833       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 20:11:01.175860       1 config.go:133] Starting endpoints config controller
	I1212 20:11:01.175867       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 20:11:01.277783       1 shared_informer.go:230] Caches are synced for service config 
	I1212 20:11:01.277839       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [0bc50581f6c5409b19eb58f6c0d73cb6fb105345d63debb26747273a0ae80e91] <==
	I1212 20:10:39.709593       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:10:39.709761       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1212 20:10:39.712230       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 20:10:39.714382       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 20:10:39.714431       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 20:10:39.718192       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:10:39.718346       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 20:10:39.718460       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 20:10:39.718608       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 20:10:39.720952       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 20:10:39.721080       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:10:39.721170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 20:10:39.721236       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 20:10:39.723707       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 20:10:39.723923       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 20:10:39.727819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 20:10:40.549606       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 20:10:40.574327       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 20:10:40.621955       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 20:10:40.632239       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:10:40.651597       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:10:40.675100       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 20:10:40.837091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 20:10:40.852000       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1212 20:10:43.210045       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:10:09 UTC, ends at Tue 2023-12-12 20:14:42 UTC. --
	Dec 12 20:11:48 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:11:48.867148    1443 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 20:11:48 ingress-addon-legacy-435457 kubelet[1443]: E1212 20:11:48.870239    1443 reflector.go:178] object-"kube-system"/"minikube-ingress-dns-token-bdqdv": Failed to list *v1.Secret: secrets "minikube-ingress-dns-token-bdqdv" is forbidden: User "system:node:ingress-addon-legacy-435457" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "ingress-addon-legacy-435457" and this object
	Dec 12 20:11:48 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:11:48.909874    1443 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-bdqdv" (UniqueName: "kubernetes.io/secret/13d32b74-827d-43b9-92d3-9ce86a78e7c3-minikube-ingress-dns-token-bdqdv") pod "kube-ingress-dns-minikube" (UID: "13d32b74-827d-43b9-92d3-9ce86a78e7c3")
	Dec 12 20:11:50 ingress-addon-legacy-435457 kubelet[1443]: E1212 20:11:50.010752    1443 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-bdqdv: failed to sync secret cache: timed out waiting for the condition
	Dec 12 20:11:50 ingress-addon-legacy-435457 kubelet[1443]: E1212 20:11:50.010939    1443 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/13d32b74-827d-43b9-92d3-9ce86a78e7c3-minikube-ingress-dns-token-bdqdv podName:13d32b74-827d-43b9-92d3-9ce86a78e7c3 nodeName:}" failed. No retries permitted until 2023-12-12 20:11:50.510907858 +0000 UTC m=+67.627719991 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-bdqdv\" (UniqueName: \"kubernetes.io/secret/13d32b74-827d-43b9-92d3-9ce86a78e7c3-minikube-ingress-dns-token-bdqdv\") pod \"kube-ingress-dns-minikube\" (UID: \"13d32b74-827d-43b9-92d3-9ce86a78e7c3\") : failed to sync secret cache: timed out waiting for the condition"
	Dec 12 20:12:01 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:12:01.668821    1443 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 20:12:01 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:12:01.754316    1443 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jvjp9" (UniqueName: "kubernetes.io/secret/fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b-default-token-jvjp9") pod "nginx" (UID: "fc23fdf7-6d78-4ba1-8c7c-d49ec558f44b")
	Dec 12 20:14:19 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:19.878695    1443 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 20:14:20 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:20.018609    1443 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jvjp9" (UniqueName: "kubernetes.io/secret/976716c6-975b-488b-9939-a06dd4ef3f02-default-token-jvjp9") pod "hello-world-app-5f5d8b66bb-4vgvr" (UID: "976716c6-975b-488b-9939-a06dd4ef3f02")
	Dec 12 20:14:21 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:21.929061    1443 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b6509688f81dd1a5189df209fb713261ba815a3cd23f86bbc03e20a1f900aaf3
	Dec 12 20:14:22 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:22.025011    1443 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-bdqdv" (UniqueName: "kubernetes.io/secret/13d32b74-827d-43b9-92d3-9ce86a78e7c3-minikube-ingress-dns-token-bdqdv") pod "13d32b74-827d-43b9-92d3-9ce86a78e7c3" (UID: "13d32b74-827d-43b9-92d3-9ce86a78e7c3")
	Dec 12 20:14:22 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:22.035912    1443 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d32b74-827d-43b9-92d3-9ce86a78e7c3-minikube-ingress-dns-token-bdqdv" (OuterVolumeSpecName: "minikube-ingress-dns-token-bdqdv") pod "13d32b74-827d-43b9-92d3-9ce86a78e7c3" (UID: "13d32b74-827d-43b9-92d3-9ce86a78e7c3"). InnerVolumeSpecName "minikube-ingress-dns-token-bdqdv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 20:14:22 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:22.109155    1443 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b6509688f81dd1a5189df209fb713261ba815a3cd23f86bbc03e20a1f900aaf3
	Dec 12 20:14:22 ingress-addon-legacy-435457 kubelet[1443]: E1212 20:14:22.109785    1443 remote_runtime.go:295] ContainerStatus "b6509688f81dd1a5189df209fb713261ba815a3cd23f86bbc03e20a1f900aaf3" from runtime service failed: rpc error: code = NotFound desc = could not find container "b6509688f81dd1a5189df209fb713261ba815a3cd23f86bbc03e20a1f900aaf3": container with ID starting with b6509688f81dd1a5189df209fb713261ba815a3cd23f86bbc03e20a1f900aaf3 not found: ID does not exist
	Dec 12 20:14:22 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:22.125373    1443 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-bdqdv" (UniqueName: "kubernetes.io/secret/13d32b74-827d-43b9-92d3-9ce86a78e7c3-minikube-ingress-dns-token-bdqdv") on node "ingress-addon-legacy-435457" DevicePath ""
	Dec 12 20:14:34 ingress-addon-legacy-435457 kubelet[1443]: E1212 20:14:34.455879    1443 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8jcs8.17a02ec30f0cd27c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8jcs8", UID:"1caf3a5e-75f8-4d50-921e-e633246f1d9b", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-435457"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1564daa9af1ae7c, ext:231568858581, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1564daa9af1ae7c, ext:231568858581, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8jcs8.17a02ec30f0cd27c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 20:14:34 ingress-addon-legacy-435457 kubelet[1443]: E1212 20:14:34.468845    1443 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8jcs8.17a02ec30f0cd27c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8jcs8", UID:"1caf3a5e-75f8-4d50-921e-e633246f1d9b", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-435457"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1564daa9af1ae7c, ext:231568858581, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1564daa9bb39ee9, ext:231581568577, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8jcs8.17a02ec30f0cd27c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 20:14:36 ingress-addon-legacy-435457 kubelet[1443]: W1212 20:14:36.984685    1443 pod_container_deletor.go:77] Container "d326ef94a54fb892f486107d85f59536203c905c41e9914a36be21297ba53e2f" not found in pod's containers
	Dec 12 20:14:38 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:38.588217    1443 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1caf3a5e-75f8-4d50-921e-e633246f1d9b-webhook-cert") pod "1caf3a5e-75f8-4d50-921e-e633246f1d9b" (UID: "1caf3a5e-75f8-4d50-921e-e633246f1d9b")
	Dec 12 20:14:38 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:38.588269    1443 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-lpmll" (UniqueName: "kubernetes.io/secret/1caf3a5e-75f8-4d50-921e-e633246f1d9b-ingress-nginx-token-lpmll") pod "1caf3a5e-75f8-4d50-921e-e633246f1d9b" (UID: "1caf3a5e-75f8-4d50-921e-e633246f1d9b")
	Dec 12 20:14:38 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:38.592904    1443 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caf3a5e-75f8-4d50-921e-e633246f1d9b-ingress-nginx-token-lpmll" (OuterVolumeSpecName: "ingress-nginx-token-lpmll") pod "1caf3a5e-75f8-4d50-921e-e633246f1d9b" (UID: "1caf3a5e-75f8-4d50-921e-e633246f1d9b"). InnerVolumeSpecName "ingress-nginx-token-lpmll". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 20:14:38 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:38.593189    1443 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caf3a5e-75f8-4d50-921e-e633246f1d9b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1caf3a5e-75f8-4d50-921e-e633246f1d9b" (UID: "1caf3a5e-75f8-4d50-921e-e633246f1d9b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 20:14:38 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:38.688635    1443 reconciler.go:319] Volume detached for volume "ingress-nginx-token-lpmll" (UniqueName: "kubernetes.io/secret/1caf3a5e-75f8-4d50-921e-e633246f1d9b-ingress-nginx-token-lpmll") on node "ingress-addon-legacy-435457" DevicePath ""
	Dec 12 20:14:38 ingress-addon-legacy-435457 kubelet[1443]: I1212 20:14:38.688665    1443 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1caf3a5e-75f8-4d50-921e-e633246f1d9b-webhook-cert") on node "ingress-addon-legacy-435457" DevicePath ""
	Dec 12 20:14:39 ingress-addon-legacy-435457 kubelet[1443]: W1212 20:14:39.532645    1443 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1caf3a5e-75f8-4d50-921e-e633246f1d9b/volumes" does not exist
	
	
	==> storage-provisioner [3dc1982385247f60558bf7cd65af97e19f6e3868f1786d97610ad8146c6c33dd] <==
	I1212 20:11:00.202170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:11:30.205774       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b9c5271cd7c58810c2fcadd8c90b508aaf74ead9b2c7322ebfc2902c6d39042e] <==
	I1212 20:11:30.864418       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:11:30.888375       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:11:30.888655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:11:30.897832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:11:30.898470       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-435457_99294c67-e5a5-444d-88a3-5a2eafe01b9b!
	I1212 20:11:30.897933       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bdeb5da7-6375-4616-b5fe-2faf389730ec", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-435457_99294c67-e5a5-444d-88a3-5a2eafe01b9b became leader
	I1212 20:11:30.999018       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-435457_99294c67-e5a5-444d-88a3-5a2eafe01b9b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-435457 -n ingress-addon-legacy-435457
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-435457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (174.14s)
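
Note on the coredns output above: each lookup for hello-world-app is first tried against the pod's DNS search suffixes (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local), producing the three NXDOMAIN answers, and only then is the name resolved as-is with NOERROR. That pattern is the normal result of a ClusterFirst pod resolv.conf with ndots:5, not a DNS failure in itself. A minimal sketch of what such a resolv.conf typically looks like for a pod in the ingress-nginx namespace is shown below; the nameserver address is an assumed default kube-dns ClusterIP and is not taken from these logs:

	# Illustrative /etc/resolv.conf for a pod in the ingress-nginx namespace (assumed defaults, not captured in this report)
	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10   # assumed kube-dns ClusterIP
	options ndots:5         # names with fewer than 5 dots are expanded with each search suffix before being tried verbatim

Because hello-world-app.default.svc.cluster.local contains fewer than five dots, the resolver walks every search suffix (the NXDOMAIN lines) before querying the name unmodified (the NOERROR line).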

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-9wvsx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-9wvsx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-9wvsx -- sh -c "ping -c 1 192.168.39.1": exit status 1 (198.719517ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-9wvsx): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (197.175754ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-vbpn5): exit status 1
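Note: busybox prints "ping: permission denied (are you root?)" when it cannot open a raw ICMP socket, which usually means the container is not running as root with CAP_NET_RAW and unprivileged ICMP (net.ipv4.ping_group_range) is not enabled in the pod's network namespace. A minimal sketch of checks one could run against the same pod; the profile and pod names are taken from the log above, while the checks themselves are assumptions rather than part of the test:

	# is the container running as root, and which capabilities does PID 1 hold?
	out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- id
	out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- sh -c "grep CapEff /proc/1/status"
	# is unprivileged ICMP allowed inside the pod's network namespace?
	out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- sh -c "cat /proc/sys/net/ipv4/ping_group_range"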
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-562818 -n multinode-562818
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-562818 logs -n 25: (1.359815144s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-600279 ssh -- ls                    | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-600279 ssh --                       | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-600279                           | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	| start   | -p mount-start-2-600279                           | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC |                     |
	|         | --profile mount-start-2-600279                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-600279 ssh -- ls                    | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-600279 ssh --                       | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-600279                           | mount-start-2-600279 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	| delete  | -p mount-start-1-581866                           | mount-start-1-581866 | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:18 UTC |
	| start   | -p multinode-562818                               | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:18 UTC | 12 Dec 23 20:20 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- apply -f                   | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- rollout                    | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- get pods -o                | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- get pods -o                | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-9wvsx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-vbpn5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-9wvsx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-vbpn5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-9wvsx -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-vbpn5 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- get pods -o                | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-9wvsx                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC |                     |
	|         | busybox-5bc68d56bd-9wvsx -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC | 12 Dec 23 20:20 UTC |
	|         | busybox-5bc68d56bd-vbpn5                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-562818 -- exec                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:20 UTC |                     |
	|         | busybox-5bc68d56bd-vbpn5 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 20:18:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:18:50.888417   29681 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:18:50.888559   29681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:18:50.888569   29681 out.go:309] Setting ErrFile to fd 2...
	I1212 20:18:50.888574   29681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:18:50.888774   29681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:18:50.889354   29681 out.go:303] Setting JSON to false
	I1212 20:18:50.890250   29681 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3685,"bootTime":1702408646,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:18:50.890311   29681 start.go:138] virtualization: kvm guest
	I1212 20:18:50.892668   29681 out.go:177] * [multinode-562818] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:18:50.894204   29681 notify.go:220] Checking for updates...
	I1212 20:18:50.894210   29681 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:18:50.895714   29681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:18:50.897266   29681 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:18:50.898765   29681 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:18:50.900437   29681 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:18:50.901824   29681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:18:50.903432   29681 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:18:50.937899   29681 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 20:18:50.939486   29681 start.go:298] selected driver: kvm2
	I1212 20:18:50.939505   29681 start.go:902] validating driver "kvm2" against <nil>
	I1212 20:18:50.939516   29681 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:18:50.940380   29681 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:18:50.940460   29681 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 20:18:50.955006   29681 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 20:18:50.955056   29681 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 20:18:50.955296   29681 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:18:50.955355   29681 cni.go:84] Creating CNI manager for ""
	I1212 20:18:50.955368   29681 cni.go:136] 0 nodes found, recommending kindnet
	I1212 20:18:50.955375   29681 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:18:50.955387   29681 start_flags.go:323] config:
	{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:18:50.955533   29681 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:18:50.957325   29681 out.go:177] * Starting control plane node multinode-562818 in cluster multinode-562818
	I1212 20:18:50.958614   29681 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:18:50.958647   29681 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 20:18:50.958660   29681 cache.go:56] Caching tarball of preloaded images
	I1212 20:18:50.958727   29681 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:18:50.958737   29681 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 20:18:50.959044   29681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:18:50.959068   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json: {Name:mk33b75d8d74469ef7b2f984c9507939ebd8ac15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:18:50.959200   29681 start.go:365] acquiring machines lock for multinode-562818: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:18:50.959227   29681 start.go:369] acquired machines lock for "multinode-562818" in 14.806µs
	I1212 20:18:50.959257   29681 start.go:93] Provisioning new machine with config: &{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:18:50.959344   29681 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 20:18:50.961034   29681 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 20:18:50.961216   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:18:50.961257   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:18:50.975045   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I1212 20:18:50.975542   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:18:50.976105   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:18:50.976126   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:18:50.976441   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:18:50.976629   29681 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:18:50.976781   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:18:50.976949   29681 start.go:159] libmachine.API.Create for "multinode-562818" (driver="kvm2")
	I1212 20:18:50.976978   29681 client.go:168] LocalClient.Create starting
	I1212 20:18:50.977009   29681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem
	I1212 20:18:50.977059   29681 main.go:141] libmachine: Decoding PEM data...
	I1212 20:18:50.977078   29681 main.go:141] libmachine: Parsing certificate...
	I1212 20:18:50.977141   29681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem
	I1212 20:18:50.977193   29681 main.go:141] libmachine: Decoding PEM data...
	I1212 20:18:50.977218   29681 main.go:141] libmachine: Parsing certificate...
	I1212 20:18:50.977244   29681 main.go:141] libmachine: Running pre-create checks...
	I1212 20:18:50.977257   29681 main.go:141] libmachine: (multinode-562818) Calling .PreCreateCheck
	I1212 20:18:50.978067   29681 main.go:141] libmachine: (multinode-562818) Calling .GetConfigRaw
	I1212 20:18:50.979377   29681 main.go:141] libmachine: Creating machine...
	I1212 20:18:50.979400   29681 main.go:141] libmachine: (multinode-562818) Calling .Create
	I1212 20:18:50.979538   29681 main.go:141] libmachine: (multinode-562818) Creating KVM machine...
	I1212 20:18:50.980814   29681 main.go:141] libmachine: (multinode-562818) DBG | found existing default KVM network
	I1212 20:18:50.981411   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:50.981294   29705 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f210}
	I1212 20:18:50.987006   29681 main.go:141] libmachine: (multinode-562818) DBG | trying to create private KVM network mk-multinode-562818 192.168.39.0/24...
	I1212 20:18:51.058736   29681 main.go:141] libmachine: (multinode-562818) DBG | private KVM network mk-multinode-562818 192.168.39.0/24 created
	I1212 20:18:51.058764   29681 main.go:141] libmachine: (multinode-562818) Setting up store path in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818 ...
	I1212 20:18:51.058780   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:51.058714   29705 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:18:51.058795   29681 main.go:141] libmachine: (multinode-562818) Building disk image from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 20:18:51.058920   29681 main.go:141] libmachine: (multinode-562818) Downloading /home/jenkins/minikube-integration/17734-9188/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 20:18:51.269998   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:51.269843   29705 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa...
	I1212 20:18:51.413734   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:51.413554   29705 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/multinode-562818.rawdisk...
	I1212 20:18:51.413774   29681 main.go:141] libmachine: (multinode-562818) DBG | Writing magic tar header
	I1212 20:18:51.413796   29681 main.go:141] libmachine: (multinode-562818) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818 (perms=drwx------)
	I1212 20:18:51.413823   29681 main.go:141] libmachine: (multinode-562818) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines (perms=drwxr-xr-x)
	I1212 20:18:51.413835   29681 main.go:141] libmachine: (multinode-562818) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube (perms=drwxr-xr-x)
	I1212 20:18:51.413842   29681 main.go:141] libmachine: (multinode-562818) DBG | Writing SSH key tar header
	I1212 20:18:51.413858   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:51.413670   29705 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818 ...
	I1212 20:18:51.413871   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818
	I1212 20:18:51.413878   29681 main.go:141] libmachine: (multinode-562818) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188 (perms=drwxrwxr-x)
	I1212 20:18:51.413890   29681 main.go:141] libmachine: (multinode-562818) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 20:18:51.413897   29681 main.go:141] libmachine: (multinode-562818) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 20:18:51.413907   29681 main.go:141] libmachine: (multinode-562818) Creating domain...
	I1212 20:18:51.413915   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines
	I1212 20:18:51.413927   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:18:51.413938   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188
	I1212 20:18:51.413946   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 20:18:51.413955   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home/jenkins
	I1212 20:18:51.413963   29681 main.go:141] libmachine: (multinode-562818) DBG | Checking permissions on dir: /home
	I1212 20:18:51.413969   29681 main.go:141] libmachine: (multinode-562818) DBG | Skipping /home - not owner
	I1212 20:18:51.415486   29681 main.go:141] libmachine: (multinode-562818) define libvirt domain using xml: 
	I1212 20:18:51.415518   29681 main.go:141] libmachine: (multinode-562818) <domain type='kvm'>
	I1212 20:18:51.415531   29681 main.go:141] libmachine: (multinode-562818)   <name>multinode-562818</name>
	I1212 20:18:51.415544   29681 main.go:141] libmachine: (multinode-562818)   <memory unit='MiB'>2200</memory>
	I1212 20:18:51.415565   29681 main.go:141] libmachine: (multinode-562818)   <vcpu>2</vcpu>
	I1212 20:18:51.415577   29681 main.go:141] libmachine: (multinode-562818)   <features>
	I1212 20:18:51.415611   29681 main.go:141] libmachine: (multinode-562818)     <acpi/>
	I1212 20:18:51.415646   29681 main.go:141] libmachine: (multinode-562818)     <apic/>
	I1212 20:18:51.415660   29681 main.go:141] libmachine: (multinode-562818)     <pae/>
	I1212 20:18:51.415673   29681 main.go:141] libmachine: (multinode-562818)     
	I1212 20:18:51.415689   29681 main.go:141] libmachine: (multinode-562818)   </features>
	I1212 20:18:51.415703   29681 main.go:141] libmachine: (multinode-562818)   <cpu mode='host-passthrough'>
	I1212 20:18:51.415745   29681 main.go:141] libmachine: (multinode-562818)   
	I1212 20:18:51.415769   29681 main.go:141] libmachine: (multinode-562818)   </cpu>
	I1212 20:18:51.415790   29681 main.go:141] libmachine: (multinode-562818)   <os>
	I1212 20:18:51.415819   29681 main.go:141] libmachine: (multinode-562818)     <type>hvm</type>
	I1212 20:18:51.415834   29681 main.go:141] libmachine: (multinode-562818)     <boot dev='cdrom'/>
	I1212 20:18:51.415846   29681 main.go:141] libmachine: (multinode-562818)     <boot dev='hd'/>
	I1212 20:18:51.415860   29681 main.go:141] libmachine: (multinode-562818)     <bootmenu enable='no'/>
	I1212 20:18:51.415871   29681 main.go:141] libmachine: (multinode-562818)   </os>
	I1212 20:18:51.415890   29681 main.go:141] libmachine: (multinode-562818)   <devices>
	I1212 20:18:51.415902   29681 main.go:141] libmachine: (multinode-562818)     <disk type='file' device='cdrom'>
	I1212 20:18:51.415928   29681 main.go:141] libmachine: (multinode-562818)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/boot2docker.iso'/>
	I1212 20:18:51.415948   29681 main.go:141] libmachine: (multinode-562818)       <target dev='hdc' bus='scsi'/>
	I1212 20:18:51.415961   29681 main.go:141] libmachine: (multinode-562818)       <readonly/>
	I1212 20:18:51.415974   29681 main.go:141] libmachine: (multinode-562818)     </disk>
	I1212 20:18:51.415984   29681 main.go:141] libmachine: (multinode-562818)     <disk type='file' device='disk'>
	I1212 20:18:51.416004   29681 main.go:141] libmachine: (multinode-562818)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 20:18:51.416024   29681 main.go:141] libmachine: (multinode-562818)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/multinode-562818.rawdisk'/>
	I1212 20:18:51.416039   29681 main.go:141] libmachine: (multinode-562818)       <target dev='hda' bus='virtio'/>
	I1212 20:18:51.416054   29681 main.go:141] libmachine: (multinode-562818)     </disk>
	I1212 20:18:51.416080   29681 main.go:141] libmachine: (multinode-562818)     <interface type='network'>
	I1212 20:18:51.416106   29681 main.go:141] libmachine: (multinode-562818)       <source network='mk-multinode-562818'/>
	I1212 20:18:51.416117   29681 main.go:141] libmachine: (multinode-562818)       <model type='virtio'/>
	I1212 20:18:51.416128   29681 main.go:141] libmachine: (multinode-562818)     </interface>
	I1212 20:18:51.416151   29681 main.go:141] libmachine: (multinode-562818)     <interface type='network'>
	I1212 20:18:51.416185   29681 main.go:141] libmachine: (multinode-562818)       <source network='default'/>
	I1212 20:18:51.416203   29681 main.go:141] libmachine: (multinode-562818)       <model type='virtio'/>
	I1212 20:18:51.416216   29681 main.go:141] libmachine: (multinode-562818)     </interface>
	I1212 20:18:51.416230   29681 main.go:141] libmachine: (multinode-562818)     <serial type='pty'>
	I1212 20:18:51.416248   29681 main.go:141] libmachine: (multinode-562818)       <target port='0'/>
	I1212 20:18:51.416262   29681 main.go:141] libmachine: (multinode-562818)     </serial>
	I1212 20:18:51.416274   29681 main.go:141] libmachine: (multinode-562818)     <console type='pty'>
	I1212 20:18:51.416286   29681 main.go:141] libmachine: (multinode-562818)       <target type='serial' port='0'/>
	I1212 20:18:51.416299   29681 main.go:141] libmachine: (multinode-562818)     </console>
	I1212 20:18:51.416317   29681 main.go:141] libmachine: (multinode-562818)     <rng model='virtio'>
	I1212 20:18:51.416338   29681 main.go:141] libmachine: (multinode-562818)       <backend model='random'>/dev/random</backend>
	I1212 20:18:51.416352   29681 main.go:141] libmachine: (multinode-562818)     </rng>
	I1212 20:18:51.416363   29681 main.go:141] libmachine: (multinode-562818)     
	I1212 20:18:51.416376   29681 main.go:141] libmachine: (multinode-562818)     
	I1212 20:18:51.416388   29681 main.go:141] libmachine: (multinode-562818)   </devices>
	I1212 20:18:51.416401   29681 main.go:141] libmachine: (multinode-562818) </domain>
	I1212 20:18:51.416415   29681 main.go:141] libmachine: (multinode-562818) 
	I1212 20:18:51.420738   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:50:19:ba in network default
	I1212 20:18:51.421321   29681 main.go:141] libmachine: (multinode-562818) Ensuring networks are active...
	I1212 20:18:51.421352   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:51.421950   29681 main.go:141] libmachine: (multinode-562818) Ensuring network default is active
	I1212 20:18:51.422213   29681 main.go:141] libmachine: (multinode-562818) Ensuring network mk-multinode-562818 is active
	I1212 20:18:51.422721   29681 main.go:141] libmachine: (multinode-562818) Getting domain xml...
	I1212 20:18:51.423363   29681 main.go:141] libmachine: (multinode-562818) Creating domain...
	I1212 20:18:52.645311   29681 main.go:141] libmachine: (multinode-562818) Waiting to get IP...
	I1212 20:18:52.646045   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:52.646487   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:52.646531   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:52.646457   29705 retry.go:31] will retry after 212.037611ms: waiting for machine to come up
	I1212 20:18:52.859643   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:52.860081   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:52.860104   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:52.860028   29705 retry.go:31] will retry after 375.585578ms: waiting for machine to come up
	I1212 20:18:53.237618   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:53.238055   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:53.238087   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:53.238007   29705 retry.go:31] will retry after 311.629674ms: waiting for machine to come up
	I1212 20:18:53.551583   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:53.552067   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:53.552098   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:53.552015   29705 retry.go:31] will retry after 508.776059ms: waiting for machine to come up
	I1212 20:18:54.062772   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:54.063140   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:54.063170   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:54.063100   29705 retry.go:31] will retry after 516.642561ms: waiting for machine to come up
	I1212 20:18:54.581907   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:54.582322   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:54.582349   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:54.582278   29705 retry.go:31] will retry after 703.09022ms: waiting for machine to come up
	I1212 20:18:55.287017   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:55.287398   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:55.287422   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:55.287351   29705 retry.go:31] will retry after 1.000233277s: waiting for machine to come up
	I1212 20:18:56.288774   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:56.289155   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:56.289184   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:56.289113   29705 retry.go:31] will retry after 1.292235811s: waiting for machine to come up
	I1212 20:18:57.583527   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:57.583845   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:57.583877   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:57.583783   29705 retry.go:31] will retry after 1.326071401s: waiting for machine to come up
	I1212 20:18:58.912208   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:18:58.912672   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:18:58.912695   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:18:58.912606   29705 retry.go:31] will retry after 1.571134302s: waiting for machine to come up
	I1212 20:19:00.485321   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:00.485639   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:19:00.485668   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:19:00.485583   29705 retry.go:31] will retry after 2.338579252s: waiting for machine to come up
	I1212 20:19:02.827023   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:02.827436   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:19:02.827469   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:19:02.827381   29705 retry.go:31] will retry after 2.382247905s: waiting for machine to come up
	I1212 20:19:05.212405   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:05.212780   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:19:05.212803   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:19:05.212714   29705 retry.go:31] will retry after 3.806392063s: waiting for machine to come up
	I1212 20:19:09.023140   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:09.023510   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:19:09.023533   29681 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:19:09.023468   29705 retry.go:31] will retry after 4.986170317s: waiting for machine to come up
	I1212 20:19:14.013417   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.013809   29681 main.go:141] libmachine: (multinode-562818) Found IP for machine: 192.168.39.77
	I1212 20:19:14.013850   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has current primary IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.013859   29681 main.go:141] libmachine: (multinode-562818) Reserving static IP address...
	I1212 20:19:14.014218   29681 main.go:141] libmachine: (multinode-562818) DBG | unable to find host DHCP lease matching {name: "multinode-562818", mac: "52:54:00:25:49:23", ip: "192.168.39.77"} in network mk-multinode-562818
	I1212 20:19:14.086411   29681 main.go:141] libmachine: (multinode-562818) Reserved static IP address: 192.168.39.77
	I1212 20:19:14.086435   29681 main.go:141] libmachine: (multinode-562818) Waiting for SSH to be available...
	I1212 20:19:14.086445   29681 main.go:141] libmachine: (multinode-562818) DBG | Getting to WaitForSSH function...
	I1212 20:19:14.089170   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.089674   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.089704   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.089841   29681 main.go:141] libmachine: (multinode-562818) DBG | Using SSH client type: external
	I1212 20:19:14.089869   29681 main.go:141] libmachine: (multinode-562818) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa (-rw-------)
	I1212 20:19:14.089902   29681 main.go:141] libmachine: (multinode-562818) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 20:19:14.089918   29681 main.go:141] libmachine: (multinode-562818) DBG | About to run SSH command:
	I1212 20:19:14.089936   29681 main.go:141] libmachine: (multinode-562818) DBG | exit 0
	I1212 20:19:14.186983   29681 main.go:141] libmachine: (multinode-562818) DBG | SSH cmd err, output: <nil>: 
	I1212 20:19:14.187272   29681 main.go:141] libmachine: (multinode-562818) KVM machine creation complete!
	I1212 20:19:14.187611   29681 main.go:141] libmachine: (multinode-562818) Calling .GetConfigRaw
	I1212 20:19:14.188118   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:14.188302   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:14.188457   29681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 20:19:14.188474   29681 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:19:14.189631   29681 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 20:19:14.189645   29681 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 20:19:14.189651   29681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 20:19:14.189657   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:14.192015   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.192325   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.192352   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.192442   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:14.192608   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.192744   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.192916   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:14.193072   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:19:14.193413   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:19:14.193426   29681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 20:19:14.322637   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:19:14.322660   29681 main.go:141] libmachine: Detecting the provisioner...
	I1212 20:19:14.322668   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:14.325481   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.325794   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.325828   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.325937   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:14.326128   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.326309   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.326490   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:14.326668   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:19:14.326997   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:19:14.327013   29681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 20:19:14.456553   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 20:19:14.456661   29681 main.go:141] libmachine: found compatible host: buildroot
	I1212 20:19:14.456681   29681 main.go:141] libmachine: Provisioning with buildroot...
	I1212 20:19:14.456698   29681 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:19:14.456980   29681 buildroot.go:166] provisioning hostname "multinode-562818"
	I1212 20:19:14.457011   29681 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:19:14.457152   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:14.459740   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.460086   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.460124   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.460306   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:14.460464   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.460617   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.460730   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:14.460904   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:19:14.461304   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:19:14.461320   29681 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-562818 && echo "multinode-562818" | sudo tee /etc/hostname
	I1212 20:19:14.604575   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-562818
	
	I1212 20:19:14.604611   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:14.607346   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.607708   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.607757   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.607916   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:14.608112   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.608259   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.608398   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:14.608546   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:19:14.608871   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:19:14.608890   29681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-562818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-562818/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-562818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:19:14.744125   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:19:14.744151   29681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:19:14.744183   29681 buildroot.go:174] setting up certificates
	I1212 20:19:14.744196   29681 provision.go:83] configureAuth start
	I1212 20:19:14.744211   29681 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:19:14.744523   29681 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:19:14.747089   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.747407   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.747456   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.747590   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:14.749734   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.750054   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.750076   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.750224   29681 provision.go:138] copyHostCerts
	I1212 20:19:14.750249   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:19:14.750275   29681 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:19:14.750291   29681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:19:14.750343   29681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:19:14.750430   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:19:14.750447   29681 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:19:14.750454   29681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:19:14.750472   29681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:19:14.750522   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:19:14.750540   29681 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:19:14.750546   29681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:19:14.750562   29681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:19:14.750613   29681 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.multinode-562818 san=[192.168.39.77 192.168.39.77 localhost 127.0.0.1 minikube multinode-562818]
	I1212 20:19:14.923789   29681 provision.go:172] copyRemoteCerts
	I1212 20:19:14.923888   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:19:14.923914   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:14.926338   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.926620   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:14.926653   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:14.926789   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:14.926979   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:14.927130   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:14.927281   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:19:15.020049   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:19:15.020123   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 20:19:15.043120   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:19:15.043253   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:19:15.067155   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:19:15.067223   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:19:15.089909   29681 provision.go:86] duration metric: configureAuth took 345.697247ms
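The configureAuth phase above generates a server certificate on the host (SANs from the log: 192.168.39.77, localhost, 127.0.0.1, minikube, multinode-562818) and copies it, together with its key and the CA certificate, to /etc/docker on the guest. A minimal way to confirm the result from the guest side, assuming the openssl binary is available in the guest image (the log itself does not confirm that):

	# The three files scp'd above should now exist on the guest.
	ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem

	# Optional: inspect the SANs baked into the server certificate.
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'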
	I1212 20:19:15.089949   29681 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:19:15.090170   29681 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:19:15.090318   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:15.093050   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.093360   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.093391   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.093767   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:15.093967   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.094127   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.094271   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:15.094447   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:19:15.094867   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:19:15.094890   29681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:19:15.410699   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
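The %!s(MISSING) in the command above is an artifact of how the command was echoed into the log (a printf format verb whose argument was dropped by the logger), not part of what actually ran; the echoed output confirms the file content. The effective step is roughly:

	# Pass --insecure-registry for the service CIDR to CRI-O via its
	# minikube drop-in, then restart the runtime to pick it up.
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio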
	I1212 20:19:15.410732   29681 main.go:141] libmachine: Checking connection to Docker...
	I1212 20:19:15.410743   29681 main.go:141] libmachine: (multinode-562818) Calling .GetURL
	I1212 20:19:15.411953   29681 main.go:141] libmachine: (multinode-562818) DBG | Using libvirt version 6000000
	I1212 20:19:15.413911   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.414244   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.414276   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.414410   29681 main.go:141] libmachine: Docker is up and running!
	I1212 20:19:15.414423   29681 main.go:141] libmachine: Reticulating splines...
	I1212 20:19:15.414429   29681 client.go:171] LocalClient.Create took 24.43744466s
	I1212 20:19:15.414450   29681 start.go:167] duration metric: libmachine.API.Create for "multinode-562818" took 24.437502334s
	I1212 20:19:15.414462   29681 start.go:300] post-start starting for "multinode-562818" (driver="kvm2")
	I1212 20:19:15.414475   29681 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:19:15.414489   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:15.414735   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:19:15.414758   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:15.417015   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.417339   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.417368   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.417543   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:15.417712   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.417977   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:15.418125   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:19:15.513405   29681 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:19:15.517591   29681 command_runner.go:130] > NAME=Buildroot
	I1212 20:19:15.517616   29681 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 20:19:15.517622   29681 command_runner.go:130] > ID=buildroot
	I1212 20:19:15.517628   29681 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 20:19:15.517633   29681 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 20:19:15.517842   29681 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:19:15.517863   29681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:19:15.517926   29681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:19:15.518021   29681 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:19:15.518032   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /etc/ssl/certs/164562.pem
	I1212 20:19:15.518140   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:19:15.527517   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:19:15.549932   29681 start.go:303] post-start completed in 135.454152ms
	I1212 20:19:15.550008   29681 main.go:141] libmachine: (multinode-562818) Calling .GetConfigRaw
	I1212 20:19:15.550581   29681 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:19:15.553024   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.553306   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.553335   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.553564   29681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:19:15.553753   29681 start.go:128] duration metric: createHost completed in 24.594399176s
	I1212 20:19:15.553776   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:15.556078   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.556397   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.556428   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.556564   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:15.556722   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.556884   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.556998   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:15.557167   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:19:15.557632   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:19:15.557647   29681 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:19:15.684030   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412355.666593611
	
	I1212 20:19:15.684058   29681 fix.go:206] guest clock: 1702412355.666593611
	I1212 20:19:15.684069   29681 fix.go:219] Guest: 2023-12-12 20:19:15.666593611 +0000 UTC Remote: 2023-12-12 20:19:15.553764642 +0000 UTC m=+24.713000820 (delta=112.828969ms)
	I1212 20:19:15.684086   29681 fix.go:190] guest clock delta is within tolerance: 112.828969ms
	I1212 20:19:15.684091   29681 start.go:83] releasing machines lock for "multinode-562818", held for 24.724856976s
	I1212 20:19:15.684107   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:15.684365   29681 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:19:15.686858   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.687226   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.687301   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.687406   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:15.687978   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:15.688202   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:15.688270   29681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:19:15.688308   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:15.688430   29681 ssh_runner.go:195] Run: cat /version.json
	I1212 20:19:15.688456   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:15.690950   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.691079   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.691334   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.691360   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.691453   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:15.691509   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:15.691540   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:15.691632   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:15.691666   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.691794   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:15.691799   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:15.691967   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:15.691967   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:19:15.692112   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:19:15.779797   29681 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 20:19:15.780447   29681 ssh_runner.go:195] Run: systemctl --version
	I1212 20:19:15.812499   29681 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:19:15.812552   29681 command_runner.go:130] > systemd 247 (247)
	I1212 20:19:15.812569   29681 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 20:19:15.812632   29681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:19:15.972036   29681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:19:15.978034   29681 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:19:15.978406   29681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:19:15.978468   29681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:19:15.992818   29681 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 20:19:15.992878   29681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:19:15.992888   29681 start.go:475] detecting cgroup driver to use...
	I1212 20:19:15.992970   29681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:19:16.007493   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:19:16.021982   29681 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:19:16.022044   29681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:19:16.036872   29681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:19:16.051404   29681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:19:16.160918   29681 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 20:19:16.161006   29681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:19:16.174929   29681 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 20:19:16.271665   29681 docker.go:219] disabling docker service ...
	I1212 20:19:16.271749   29681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:19:16.285517   29681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:19:16.297780   29681 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 20:19:16.297858   29681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:19:16.398541   29681 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 20:19:16.398619   29681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:19:16.503797   29681 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 20:19:16.503840   29681 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 20:19:16.503906   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:19:16.516426   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:19:16.533379   29681 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
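The crictl configuration written just above is a one-line YAML file that points crictl at the CRI-O socket; the equivalent by hand is simply:

	# Tell crictl which CRI endpoint to talk to.
	sudo mkdir -p /etc
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml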
	I1212 20:19:16.533437   29681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 20:19:16.533492   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:19:16.542722   29681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:19:16.542780   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:19:16.552113   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:19:16.561632   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:19:16.571973   29681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:19:16.581842   29681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:19:16.590829   29681 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:19:16.591227   29681 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:19:16.591292   29681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 20:19:16.603520   29681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:19:16.612238   29681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:19:16.713178   29681 ssh_runner.go:195] Run: sudo systemctl restart crio
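The block above edits /etc/crio/crio.conf.d/02-crio.conf in place, loads the bridge netfilter module (the earlier sysctl failure only means br_netfilter was not loaded yet), enables IPv4 forwarding, and restarts CRI-O. A condensed sketch of the same sequence, mirroring the Run: lines:

	# Pin the pause image and switch CRI-O to the cgroupfs driver.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk

	# Kernel prerequisites for pod networking.
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

	# Apply the new configuration.
	sudo systemctl daemon-reload
	sudo systemctl restart crio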
	I1212 20:19:16.885311   29681 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:19:16.885376   29681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:19:16.890679   29681 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:19:16.890704   29681 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:19:16.890711   29681 command_runner.go:130] > Device: 16h/22d	Inode: 820         Links: 1
	I1212 20:19:16.890720   29681 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:19:16.890728   29681 command_runner.go:130] > Access: 2023-12-12 20:19:16.853293035 +0000
	I1212 20:19:16.890737   29681 command_runner.go:130] > Modify: 2023-12-12 20:19:16.853293035 +0000
	I1212 20:19:16.890745   29681 command_runner.go:130] > Change: 2023-12-12 20:19:16.853293035 +0000
	I1212 20:19:16.890751   29681 command_runner.go:130] >  Birth: -
	I1212 20:19:16.890772   29681 start.go:543] Will wait 60s for crictl version
	I1212 20:19:16.890823   29681 ssh_runner.go:195] Run: which crictl
	I1212 20:19:16.894731   29681 command_runner.go:130] > /usr/bin/crictl
	I1212 20:19:16.894808   29681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:19:16.931773   29681 command_runner.go:130] > Version:  0.1.0
	I1212 20:19:16.931814   29681 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:19:16.931820   29681 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 20:19:16.931826   29681 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:19:16.933310   29681 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 20:19:16.933387   29681 ssh_runner.go:195] Run: crio --version
	I1212 20:19:16.982140   29681 command_runner.go:130] > crio version 1.24.1
	I1212 20:19:16.982163   29681 command_runner.go:130] > Version:          1.24.1
	I1212 20:19:16.982170   29681 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:19:16.982175   29681 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:19:16.982182   29681 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:19:16.982186   29681 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:19:16.982191   29681 command_runner.go:130] > Compiler:         gc
	I1212 20:19:16.982197   29681 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:19:16.982205   29681 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:19:16.982216   29681 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:19:16.982223   29681 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:19:16.982231   29681 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:19:16.982339   29681 ssh_runner.go:195] Run: crio --version
	I1212 20:19:17.033079   29681 command_runner.go:130] > crio version 1.24.1
	I1212 20:19:17.033119   29681 command_runner.go:130] > Version:          1.24.1
	I1212 20:19:17.033129   29681 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:19:17.033136   29681 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:19:17.033163   29681 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:19:17.033171   29681 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:19:17.033179   29681 command_runner.go:130] > Compiler:         gc
	I1212 20:19:17.033186   29681 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:19:17.033199   29681 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:19:17.033214   29681 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:19:17.033224   29681 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:19:17.033231   29681 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:19:17.035466   29681 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 20:19:17.036996   29681 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:19:17.039645   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:17.039959   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:17.039991   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:17.040167   29681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:19:17.044616   29681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:19:17.057928   29681 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:19:17.057990   29681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:19:17.092358   29681 command_runner.go:130] > {
	I1212 20:19:17.092383   29681 command_runner.go:130] >   "images": [
	I1212 20:19:17.092390   29681 command_runner.go:130] >   ]
	I1212 20:19:17.092395   29681 command_runner.go:130] > }
	I1212 20:19:17.093616   29681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 20:19:17.093676   29681 ssh_runner.go:195] Run: which lz4
	I1212 20:19:17.097442   29681 command_runner.go:130] > /usr/bin/lz4
	I1212 20:19:17.097839   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 20:19:17.097920   29681 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 20:19:17.102257   29681 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:19:17.102293   29681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:19:17.102310   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 20:19:18.892775   29681 crio.go:444] Took 1.794875 seconds to copy over tarball
	I1212 20:19:18.892865   29681 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:19:21.656923   29681 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.764024087s)
	I1212 20:19:21.656960   29681 crio.go:451] Took 2.764150 seconds to extract the tarball
	I1212 20:19:21.656972   29681 ssh_runner.go:146] rm: /preloaded.tar.lz4
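Because the fresh VM had no images (the first crictl query returned an empty list), the preload tarball is copied over SSH and unpacked into /var, which seeds CRI-O's image storage. Done by hand with the paths, key, and address shown in this log, the step would look roughly as follows; minikube's own runner performs the copy internally, and a plain scp to / may additionally need write access there:

	# Host side: push the preload archive to the guest.
	scp -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa \
	    /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 \
	    docker@192.168.39.77:/preloaded.tar.lz4

	# Guest side: extract into /var and clean up.
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4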
	I1212 20:19:21.697338   29681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:19:21.779914   29681 command_runner.go:130] > {
	I1212 20:19:21.779937   29681 command_runner.go:130] >   "images": [
	I1212 20:19:21.779941   29681 command_runner.go:130] >     {
	I1212 20:19:21.779948   29681 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 20:19:21.779963   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.779969   29681 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 20:19:21.779973   29681 command_runner.go:130] >       ],
	I1212 20:19:21.779977   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.779986   29681 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 20:19:21.779994   29681 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 20:19:21.780000   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780008   29681 command_runner.go:130] >       "size": "65258016",
	I1212 20:19:21.780018   29681 command_runner.go:130] >       "uid": null,
	I1212 20:19:21.780026   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.780033   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.780039   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.780043   29681 command_runner.go:130] >     },
	I1212 20:19:21.780047   29681 command_runner.go:130] >     {
	I1212 20:19:21.780056   29681 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 20:19:21.780060   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.780065   29681 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:19:21.780071   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780077   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.780087   29681 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 20:19:21.780103   29681 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 20:19:21.780109   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780119   29681 command_runner.go:130] >       "size": "31470524",
	I1212 20:19:21.780125   29681 command_runner.go:130] >       "uid": null,
	I1212 20:19:21.780133   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.780138   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.780142   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.780146   29681 command_runner.go:130] >     },
	I1212 20:19:21.780150   29681 command_runner.go:130] >     {
	I1212 20:19:21.780156   29681 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 20:19:21.780161   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.780166   29681 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 20:19:21.780170   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780180   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.780193   29681 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 20:19:21.780209   29681 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 20:19:21.780222   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780234   29681 command_runner.go:130] >       "size": "53621675",
	I1212 20:19:21.780240   29681 command_runner.go:130] >       "uid": null,
	I1212 20:19:21.780247   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.780251   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.780258   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.780263   29681 command_runner.go:130] >     },
	I1212 20:19:21.780273   29681 command_runner.go:130] >     {
	I1212 20:19:21.780287   29681 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 20:19:21.780297   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.780306   29681 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 20:19:21.780315   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780325   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.780337   29681 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 20:19:21.780349   29681 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 20:19:21.780371   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780393   29681 command_runner.go:130] >       "size": "295456551",
	I1212 20:19:21.780399   29681 command_runner.go:130] >       "uid": {
	I1212 20:19:21.780408   29681 command_runner.go:130] >         "value": "0"
	I1212 20:19:21.780417   29681 command_runner.go:130] >       },
	I1212 20:19:21.780425   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.780430   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.780440   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.780450   29681 command_runner.go:130] >     },
	I1212 20:19:21.780460   29681 command_runner.go:130] >     {
	I1212 20:19:21.780473   29681 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 20:19:21.780483   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.780495   29681 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 20:19:21.780503   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780509   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.780522   29681 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 20:19:21.780538   29681 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 20:19:21.780548   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780558   29681 command_runner.go:130] >       "size": "127226832",
	I1212 20:19:21.780568   29681 command_runner.go:130] >       "uid": {
	I1212 20:19:21.780578   29681 command_runner.go:130] >         "value": "0"
	I1212 20:19:21.780588   29681 command_runner.go:130] >       },
	I1212 20:19:21.780596   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.780600   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.780610   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.780628   29681 command_runner.go:130] >     },
	I1212 20:19:21.780637   29681 command_runner.go:130] >     {
	I1212 20:19:21.780732   29681 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 20:19:21.780758   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.780769   29681 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 20:19:21.780775   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780782   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.780796   29681 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 20:19:21.780812   29681 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 20:19:21.780823   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780832   29681 command_runner.go:130] >       "size": "123261750",
	I1212 20:19:21.780842   29681 command_runner.go:130] >       "uid": {
	I1212 20:19:21.780849   29681 command_runner.go:130] >         "value": "0"
	I1212 20:19:21.780859   29681 command_runner.go:130] >       },
	I1212 20:19:21.780869   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.780878   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.780883   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.780890   29681 command_runner.go:130] >     },
	I1212 20:19:21.780896   29681 command_runner.go:130] >     {
	I1212 20:19:21.780910   29681 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 20:19:21.780921   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.780932   29681 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 20:19:21.780942   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780949   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.780964   29681 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 20:19:21.780975   29681 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 20:19:21.780985   29681 command_runner.go:130] >       ],
	I1212 20:19:21.780996   29681 command_runner.go:130] >       "size": "74749335",
	I1212 20:19:21.781006   29681 command_runner.go:130] >       "uid": null,
	I1212 20:19:21.781016   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.781026   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.781034   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.781048   29681 command_runner.go:130] >     },
	I1212 20:19:21.781055   29681 command_runner.go:130] >     {
	I1212 20:19:21.781064   29681 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 20:19:21.781071   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.781081   29681 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 20:19:21.781090   29681 command_runner.go:130] >       ],
	I1212 20:19:21.781108   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.781144   29681 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 20:19:21.781160   29681 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 20:19:21.781170   29681 command_runner.go:130] >       ],
	I1212 20:19:21.781180   29681 command_runner.go:130] >       "size": "61551410",
	I1212 20:19:21.781190   29681 command_runner.go:130] >       "uid": {
	I1212 20:19:21.781200   29681 command_runner.go:130] >         "value": "0"
	I1212 20:19:21.781210   29681 command_runner.go:130] >       },
	I1212 20:19:21.781219   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.781227   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.781238   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.781247   29681 command_runner.go:130] >     },
	I1212 20:19:21.781260   29681 command_runner.go:130] >     {
	I1212 20:19:21.781273   29681 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 20:19:21.781284   29681 command_runner.go:130] >       "repoTags": [
	I1212 20:19:21.781292   29681 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 20:19:21.781301   29681 command_runner.go:130] >       ],
	I1212 20:19:21.781309   29681 command_runner.go:130] >       "repoDigests": [
	I1212 20:19:21.781320   29681 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 20:19:21.781336   29681 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 20:19:21.781346   29681 command_runner.go:130] >       ],
	I1212 20:19:21.781353   29681 command_runner.go:130] >       "size": "750414",
	I1212 20:19:21.781360   29681 command_runner.go:130] >       "uid": {
	I1212 20:19:21.781370   29681 command_runner.go:130] >         "value": "65535"
	I1212 20:19:21.781376   29681 command_runner.go:130] >       },
	I1212 20:19:21.781386   29681 command_runner.go:130] >       "username": "",
	I1212 20:19:21.781395   29681 command_runner.go:130] >       "spec": null,
	I1212 20:19:21.781402   29681 command_runner.go:130] >       "pinned": false
	I1212 20:19:21.781407   29681 command_runner.go:130] >     }
	I1212 20:19:21.781415   29681 command_runner.go:130] >   ]
	I1212 20:19:21.781429   29681 command_runner.go:130] > }
	I1212 20:19:21.781582   29681 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 20:19:21.781598   29681 cache_images.go:84] Images are preloaded, skipping loading
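After extraction, the second crictl query returns the full v1.28.4 image set, so image loading is skipped. A quick manual spot-check on the guest would be:

	# Confirm the control-plane images for this Kubernetes version are present.
	sudo crictl images | grep 'registry.k8s.io/kube'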
	I1212 20:19:21.781669   29681 ssh_runner.go:195] Run: crio config
	I1212 20:19:21.840831   29681 command_runner.go:130] ! time="2023-12-12 20:19:21.832061285Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 20:19:21.840917   29681 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:19:21.849120   29681 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:19:21.849158   29681 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:19:21.849169   29681 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:19:21.849175   29681 command_runner.go:130] > #
	I1212 20:19:21.849186   29681 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:19:21.849196   29681 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:19:21.849207   29681 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:19:21.849221   29681 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:19:21.849230   29681 command_runner.go:130] > # reload'.
	I1212 20:19:21.849240   29681 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:19:21.849257   29681 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:19:21.849272   29681 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:19:21.849284   29681 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:19:21.849293   29681 command_runner.go:130] > [crio]
	I1212 20:19:21.849306   29681 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:19:21.849351   29681 command_runner.go:130] > # containers images, in this directory.
	I1212 20:19:21.849364   29681 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 20:19:21.849385   29681 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:19:21.849397   29681 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 20:19:21.849410   29681 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:19:21.849422   29681 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:19:21.849434   29681 command_runner.go:130] > storage_driver = "overlay"
	I1212 20:19:21.849446   29681 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:19:21.849459   29681 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:19:21.849469   29681 command_runner.go:130] > storage_option = [
	I1212 20:19:21.849480   29681 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 20:19:21.849488   29681 command_runner.go:130] > ]
	I1212 20:19:21.849501   29681 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:19:21.849513   29681 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:19:21.849529   29681 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:19:21.849542   29681 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:19:21.849557   29681 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:19:21.849567   29681 command_runner.go:130] > # always happen on a node reboot
	I1212 20:19:21.849578   29681 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:19:21.849589   29681 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:19:21.849602   29681 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:19:21.849623   29681 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:19:21.849635   29681 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 20:19:21.849649   29681 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:19:21.849660   29681 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:19:21.849667   29681 command_runner.go:130] > # internal_wipe = true
	I1212 20:19:21.849673   29681 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:19:21.849681   29681 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:19:21.849687   29681 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:19:21.849699   29681 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:19:21.849708   29681 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:19:21.849713   29681 command_runner.go:130] > [crio.api]
	I1212 20:19:21.849721   29681 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:19:21.849731   29681 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:19:21.849739   29681 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:19:21.849746   29681 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:19:21.849753   29681 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:19:21.849762   29681 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:19:21.849771   29681 command_runner.go:130] > # stream_port = "0"
	I1212 20:19:21.849782   29681 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:19:21.849792   29681 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:19:21.849805   29681 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:19:21.849815   29681 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:19:21.849828   29681 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:19:21.849840   29681 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 20:19:21.849849   29681 command_runner.go:130] > # minutes.
	I1212 20:19:21.849858   29681 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:19:21.849871   29681 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:19:21.849884   29681 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 20:19:21.849894   29681 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:19:21.849912   29681 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:19:21.849926   29681 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:19:21.849937   29681 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 20:19:21.849943   29681 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:19:21.849951   29681 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:19:21.849958   29681 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 20:19:21.849966   29681 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:19:21.849973   29681 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 20:19:21.849998   29681 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:19:21.850011   29681 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:19:21.850014   29681 command_runner.go:130] > [crio.runtime]
	I1212 20:19:21.850020   29681 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:19:21.850028   29681 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:19:21.850035   29681 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:19:21.850042   29681 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:19:21.850048   29681 command_runner.go:130] > # default_ulimits = [
	I1212 20:19:21.850052   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850060   29681 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:19:21.850068   29681 command_runner.go:130] > # no_pivot = false
	I1212 20:19:21.850076   29681 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:19:21.850085   29681 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:19:21.850092   29681 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:19:21.850098   29681 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:19:21.850105   29681 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:19:21.850112   29681 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:19:21.850119   29681 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 20:19:21.850124   29681 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:19:21.850133   29681 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:19:21.850139   29681 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:19:21.850145   29681 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:19:21.850153   29681 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:19:21.850159   29681 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:19:21.850165   29681 command_runner.go:130] > conmon_env = [
	I1212 20:19:21.850171   29681 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 20:19:21.850179   29681 command_runner.go:130] > ]
	I1212 20:19:21.850187   29681 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:19:21.850194   29681 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:19:21.850202   29681 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:19:21.850207   29681 command_runner.go:130] > # default_env = [
	I1212 20:19:21.850211   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850219   29681 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:19:21.850225   29681 command_runner.go:130] > # selinux = false
	I1212 20:19:21.850231   29681 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:19:21.850238   29681 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 20:19:21.850245   29681 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 20:19:21.850249   29681 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:19:21.850257   29681 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 20:19:21.850265   29681 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 20:19:21.850271   29681 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 20:19:21.850278   29681 command_runner.go:130] > # which might increase security.
	I1212 20:19:21.850283   29681 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 20:19:21.850291   29681 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:19:21.850299   29681 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:19:21.850308   29681 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:19:21.850316   29681 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:19:21.850323   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:19:21.850328   29681 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:19:21.850336   29681 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:19:21.850343   29681 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:19:21.850348   29681 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:19:21.850356   29681 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:19:21.850361   29681 command_runner.go:130] > # irqbalance daemon.
	I1212 20:19:21.850368   29681 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:19:21.850377   29681 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:19:21.850384   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:19:21.850389   29681 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:19:21.850396   29681 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:19:21.850401   29681 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:19:21.850409   29681 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:19:21.850416   29681 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:19:21.850423   29681 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:19:21.850431   29681 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:19:21.850439   29681 command_runner.go:130] > # will be added.
	I1212 20:19:21.850445   29681 command_runner.go:130] > # default_capabilities = [
	I1212 20:19:21.850449   29681 command_runner.go:130] > # 	"CHOWN",
	I1212 20:19:21.850453   29681 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:19:21.850460   29681 command_runner.go:130] > # 	"FSETID",
	I1212 20:19:21.850464   29681 command_runner.go:130] > # 	"FOWNER",
	I1212 20:19:21.850470   29681 command_runner.go:130] > # 	"SETGID",
	I1212 20:19:21.850474   29681 command_runner.go:130] > # 	"SETUID",
	I1212 20:19:21.850480   29681 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:19:21.850484   29681 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:19:21.850490   29681 command_runner.go:130] > # 	"KILL",
	I1212 20:19:21.850493   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850502   29681 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:19:21.850510   29681 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:19:21.850517   29681 command_runner.go:130] > # default_sysctls = [
	I1212 20:19:21.850520   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850527   29681 command_runner.go:130] > # List of devices on the host that a
	I1212 20:19:21.850534   29681 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:19:21.850542   29681 command_runner.go:130] > # allowed_devices = [
	I1212 20:19:21.850546   29681 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:19:21.850550   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850557   29681 command_runner.go:130] > # List of additional devices, specified as
	I1212 20:19:21.850565   29681 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:19:21.850583   29681 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:19:21.850616   29681 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:19:21.850623   29681 command_runner.go:130] > # additional_devices = [
	I1212 20:19:21.850630   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850638   29681 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:19:21.850642   29681 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:19:21.850649   29681 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:19:21.850652   29681 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:19:21.850658   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850664   29681 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:19:21.850672   29681 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:19:21.850677   29681 command_runner.go:130] > # Defaults to false.
	I1212 20:19:21.850682   29681 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:19:21.850701   29681 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:19:21.850710   29681 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:19:21.850714   29681 command_runner.go:130] > # hooks_dir = [
	I1212 20:19:21.850721   29681 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:19:21.850725   29681 command_runner.go:130] > # ]
	I1212 20:19:21.850733   29681 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:19:21.850742   29681 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:19:21.850748   29681 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:19:21.850753   29681 command_runner.go:130] > #
	I1212 20:19:21.850762   29681 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:19:21.850774   29681 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:19:21.850786   29681 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:19:21.850796   29681 command_runner.go:130] > #
	I1212 20:19:21.850808   29681 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:19:21.850821   29681 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:19:21.850835   29681 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:19:21.850846   29681 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:19:21.850854   29681 command_runner.go:130] > #
	I1212 20:19:21.850868   29681 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:19:21.850877   29681 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:19:21.850890   29681 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:19:21.850900   29681 command_runner.go:130] > pids_limit = 1024
	I1212 20:19:21.850912   29681 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 20:19:21.850925   29681 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:19:21.850938   29681 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:19:21.850954   29681 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:19:21.850963   29681 command_runner.go:130] > # log_size_max = -1
	I1212 20:19:21.850970   29681 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:19:21.850981   29681 command_runner.go:130] > # log_to_journald = false
	I1212 20:19:21.850990   29681 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:19:21.850997   29681 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:19:21.851003   29681 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:19:21.851008   29681 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:19:21.851017   29681 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:19:21.851027   29681 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:19:21.851034   29681 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:19:21.851042   29681 command_runner.go:130] > # read_only = false
	I1212 20:19:21.851051   29681 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:19:21.851057   29681 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:19:21.851063   29681 command_runner.go:130] > # live configuration reload.
	I1212 20:19:21.851067   29681 command_runner.go:130] > # log_level = "info"
	I1212 20:19:21.851075   29681 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:19:21.851083   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:19:21.851089   29681 command_runner.go:130] > # log_filter = ""
	I1212 20:19:21.851098   29681 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:19:21.851105   29681 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:19:21.851118   29681 command_runner.go:130] > # separated by comma.
	I1212 20:19:21.851125   29681 command_runner.go:130] > # uid_mappings = ""
	I1212 20:19:21.851131   29681 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:19:21.851140   29681 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:19:21.851144   29681 command_runner.go:130] > # separated by comma.
	I1212 20:19:21.851150   29681 command_runner.go:130] > # gid_mappings = ""
	I1212 20:19:21.851156   29681 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:19:21.851164   29681 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:19:21.851180   29681 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:19:21.851187   29681 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:19:21.851194   29681 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:19:21.851202   29681 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:19:21.851211   29681 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:19:21.851217   29681 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:19:21.851223   29681 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:19:21.851232   29681 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:19:21.851253   29681 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:19:21.851264   29681 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:19:21.851272   29681 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:19:21.851278   29681 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:19:21.851285   29681 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:19:21.851290   29681 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:19:21.851301   29681 command_runner.go:130] > drop_infra_ctr = false
	I1212 20:19:21.851309   29681 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:19:21.851315   29681 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:19:21.851324   29681 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:19:21.851332   29681 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:19:21.851338   29681 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:19:21.851345   29681 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:19:21.851350   29681 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:19:21.851359   29681 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:19:21.851366   29681 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 20:19:21.851372   29681 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:19:21.851381   29681 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 20:19:21.851392   29681 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 20:19:21.851399   29681 command_runner.go:130] > # default_runtime = "runc"
	I1212 20:19:21.851405   29681 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:19:21.851414   29681 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 20:19:21.851426   29681 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:19:21.851433   29681 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:19:21.851441   29681 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:19:21.851448   29681 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:19:21.851453   29681 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:19:21.851459   29681 command_runner.go:130] > # ]
	I1212 20:19:21.851474   29681 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:19:21.851483   29681 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:19:21.851492   29681 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 20:19:21.851501   29681 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 20:19:21.851506   29681 command_runner.go:130] > #
	I1212 20:19:21.851511   29681 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 20:19:21.851519   29681 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 20:19:21.851526   29681 command_runner.go:130] > #  runtime_type = "oci"
	I1212 20:19:21.851531   29681 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 20:19:21.851538   29681 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 20:19:21.851543   29681 command_runner.go:130] > #  allowed_annotations = []
	I1212 20:19:21.851549   29681 command_runner.go:130] > # Where:
	I1212 20:19:21.851554   29681 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 20:19:21.851563   29681 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 20:19:21.851572   29681 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:19:21.851580   29681 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:19:21.851584   29681 command_runner.go:130] > #   in $PATH.
	I1212 20:19:21.851592   29681 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 20:19:21.851600   29681 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:19:21.851609   29681 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 20:19:21.851615   29681 command_runner.go:130] > #   state.
	I1212 20:19:21.851621   29681 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:19:21.851629   29681 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 20:19:21.851637   29681 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:19:21.851645   29681 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:19:21.851651   29681 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:19:21.851662   29681 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:19:21.851670   29681 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:19:21.851678   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:19:21.851687   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:19:21.851699   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:19:21.851709   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:19:21.851719   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:19:21.851728   29681 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:19:21.851736   29681 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:19:21.851744   29681 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 20:19:21.851751   29681 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:19:21.851756   29681 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:19:21.851767   29681 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 20:19:21.851777   29681 command_runner.go:130] > runtime_type = "oci"
	I1212 20:19:21.851787   29681 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:19:21.851798   29681 command_runner.go:130] > runtime_config_path = ""
	I1212 20:19:21.851807   29681 command_runner.go:130] > monitor_path = ""
	I1212 20:19:21.851817   29681 command_runner.go:130] > monitor_cgroup = ""
	I1212 20:19:21.851827   29681 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:19:21.851839   29681 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 20:19:21.851848   29681 command_runner.go:130] > # running containers
	I1212 20:19:21.851858   29681 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 20:19:21.851869   29681 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 20:19:21.851920   29681 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 20:19:21.851935   29681 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 20:19:21.851940   29681 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 20:19:21.851945   29681 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 20:19:21.851952   29681 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 20:19:21.851958   29681 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 20:19:21.851965   29681 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 20:19:21.851973   29681 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 20:19:21.851979   29681 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:19:21.851986   29681 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:19:21.851996   29681 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:19:21.852004   29681 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:19:21.852014   29681 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 20:19:21.852026   29681 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:19:21.852038   29681 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:19:21.852047   29681 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:19:21.852055   29681 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:19:21.852063   29681 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:19:21.852069   29681 command_runner.go:130] > # Example:
	I1212 20:19:21.852074   29681 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:19:21.852081   29681 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:19:21.852086   29681 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:19:21.852094   29681 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:19:21.852100   29681 command_runner.go:130] > # cpuset = 0
	I1212 20:19:21.852104   29681 command_runner.go:130] > # cpushares = "0-1"
	I1212 20:19:21.852111   29681 command_runner.go:130] > # Where:
	I1212 20:19:21.852116   29681 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:19:21.852125   29681 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:19:21.852132   29681 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:19:21.852140   29681 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:19:21.852148   29681 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:19:21.852156   29681 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:19:21.852162   29681 command_runner.go:130] > # 
	I1212 20:19:21.852168   29681 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:19:21.852174   29681 command_runner.go:130] > #
	I1212 20:19:21.852180   29681 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:19:21.852188   29681 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 20:19:21.852196   29681 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 20:19:21.852202   29681 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 20:19:21.852210   29681 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 20:19:21.852216   29681 command_runner.go:130] > [crio.image]
	I1212 20:19:21.852227   29681 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:19:21.852234   29681 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:19:21.852240   29681 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:19:21.852249   29681 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:19:21.852254   29681 command_runner.go:130] > # global_auth_file = ""
	I1212 20:19:21.852259   29681 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:19:21.852266   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:19:21.852272   29681 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 20:19:21.852283   29681 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:19:21.852288   29681 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:19:21.852293   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:19:21.852297   29681 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:19:21.852303   29681 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:19:21.852309   29681 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 20:19:21.852314   29681 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 20:19:21.852320   29681 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:19:21.852324   29681 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:19:21.852330   29681 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:19:21.852338   29681 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:19:21.852344   29681 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:19:21.852350   29681 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:19:21.852355   29681 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:19:21.852359   29681 command_runner.go:130] > # signature_policy = ""
	I1212 20:19:21.852367   29681 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:19:21.852375   29681 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:19:21.852381   29681 command_runner.go:130] > # changing them here.
	I1212 20:19:21.852386   29681 command_runner.go:130] > # insecure_registries = [
	I1212 20:19:21.852392   29681 command_runner.go:130] > # ]
	I1212 20:19:21.852403   29681 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:19:21.852410   29681 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:19:21.852417   29681 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:19:21.852422   29681 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:19:21.852429   29681 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:19:21.852435   29681 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:19:21.852441   29681 command_runner.go:130] > # CNI plugins.
	I1212 20:19:21.852445   29681 command_runner.go:130] > [crio.network]
	I1212 20:19:21.852456   29681 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:19:21.852464   29681 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:19:21.852471   29681 command_runner.go:130] > # cni_default_network = ""
	I1212 20:19:21.852477   29681 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:19:21.852484   29681 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:19:21.852489   29681 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:19:21.852496   29681 command_runner.go:130] > # plugin_dirs = [
	I1212 20:19:21.852500   29681 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:19:21.852505   29681 command_runner.go:130] > # ]
	I1212 20:19:21.852511   29681 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:19:21.852516   29681 command_runner.go:130] > [crio.metrics]
	I1212 20:19:21.852523   29681 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:19:21.852529   29681 command_runner.go:130] > enable_metrics = true
	I1212 20:19:21.852534   29681 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:19:21.852541   29681 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:19:21.852547   29681 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:19:21.852556   29681 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:19:21.852562   29681 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:19:21.852570   29681 command_runner.go:130] > # metrics_collectors = [
	I1212 20:19:21.852575   29681 command_runner.go:130] > # 	"operations",
	I1212 20:19:21.852580   29681 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 20:19:21.852587   29681 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 20:19:21.852591   29681 command_runner.go:130] > # 	"operations_errors",
	I1212 20:19:21.852598   29681 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 20:19:21.852603   29681 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 20:19:21.852609   29681 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 20:19:21.852614   29681 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 20:19:21.852620   29681 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 20:19:21.852625   29681 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:19:21.852639   29681 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 20:19:21.852646   29681 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:19:21.852650   29681 command_runner.go:130] > # 	"containers_oom",
	I1212 20:19:21.852655   29681 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:19:21.852659   29681 command_runner.go:130] > # 	"operations_total",
	I1212 20:19:21.852666   29681 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:19:21.852670   29681 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:19:21.852679   29681 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:19:21.852685   29681 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:19:21.852690   29681 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:19:21.852701   29681 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:19:21.852706   29681 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:19:21.852712   29681 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:19:21.852717   29681 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:19:21.852722   29681 command_runner.go:130] > # ]
	I1212 20:19:21.852728   29681 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:19:21.852734   29681 command_runner.go:130] > # metrics_port = 9090
	I1212 20:19:21.852740   29681 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:19:21.852746   29681 command_runner.go:130] > # metrics_socket = ""
	I1212 20:19:21.852752   29681 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:19:21.852763   29681 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:19:21.852777   29681 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:19:21.852787   29681 command_runner.go:130] > # certificate on any modification event.
	I1212 20:19:21.852797   29681 command_runner.go:130] > # metrics_cert = ""
	I1212 20:19:21.852808   29681 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:19:21.852823   29681 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:19:21.852832   29681 command_runner.go:130] > # metrics_key = ""
	I1212 20:19:21.852844   29681 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:19:21.852853   29681 command_runner.go:130] > [crio.tracing]
	I1212 20:19:21.852862   29681 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:19:21.852869   29681 command_runner.go:130] > # enable_tracing = false
	I1212 20:19:21.852875   29681 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 20:19:21.852881   29681 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 20:19:21.852887   29681 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 20:19:21.852893   29681 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:19:21.852899   29681 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:19:21.852905   29681 command_runner.go:130] > [crio.stats]
	I1212 20:19:21.852911   29681 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:19:21.852919   29681 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:19:21.852926   29681 command_runner.go:130] > # stats_collection_period = 0
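	That is the end of the crio.conf dump logged above. As a side note (a sketch, not part of the minikube log), any of the values shown there could be overridden with a drop-in file rather than by editing /etc/crio/crio.conf directly, assuming the packaged CRI-O reads drop-ins from /etc/crio/crio.conf.d/ (the upstream default):
	# hypothetical drop-in; CRI-O merges *.conf files from crio.conf.d over the main config
	sudo mkdir -p /etc/crio/crio.conf.d
	printf '[crio.runtime]\npids_limit = 1024\n' | sudo tee /etc/crio/crio.conf.d/99-overrides.conf
	sudo systemctl restart crio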
	I1212 20:19:21.853041   29681 cni.go:84] Creating CNI manager for ""
	I1212 20:19:21.853058   29681 cni.go:136] 1 nodes found, recommending kindnet
	I1212 20:19:21.853076   29681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:19:21.853099   29681 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-562818 NodeName:multinode-562818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:19:21.853245   29681 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-562818"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:19:21.853304   29681 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-562818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 20:19:21.853363   29681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 20:19:21.862865   29681 command_runner.go:130] > kubeadm
	I1212 20:19:21.862892   29681 command_runner.go:130] > kubectl
	I1212 20:19:21.862897   29681 command_runner.go:130] > kubelet
	I1212 20:19:21.862921   29681 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 20:19:21.862982   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:19:21.871886   29681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1212 20:19:21.888405   29681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:19:21.905017   29681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
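	At this point the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the generated kubeadm config have all been copied to the node. A short sketch of how they are typically picked up and sanity-checked (commands assumed, not taken from this log):
	sudo systemctl daemon-reload                 # make systemd re-read the new kubelet unit and drop-in
	systemctl cat kubelet                        # show the effective unit, including 10-kubeadm.conf
	# 'kubeadm config validate' is assumed available (added in recent kubeadm releases)
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new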
	I1212 20:19:21.921401   29681 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I1212 20:19:21.925258   29681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:19:21.938240   29681 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818 for IP: 192.168.39.77
	I1212 20:19:21.938273   29681 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:21.938454   29681 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:19:21.938499   29681 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:19:21.938541   29681 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key
	I1212 20:19:21.938558   29681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt with IP's: []
	I1212 20:19:22.055847   29681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt ...
	I1212 20:19:22.055878   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt: {Name:mk6ca797e8003a233f9e8943669b7411f9f4de2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:22.056040   29681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key ...
	I1212 20:19:22.056050   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key: {Name:mkd018515dc883a2786113a6bb39523e3c8a7561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:22.056137   29681 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key.2f0f2646
	I1212 20:19:22.056151   29681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt.2f0f2646 with IP's: [192.168.39.77 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 20:19:22.243566   29681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt.2f0f2646 ...
	I1212 20:19:22.243592   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt.2f0f2646: {Name:mk5a35bd67579a4638a9d0dff9f5da1a145b9ad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:22.243739   29681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key.2f0f2646 ...
	I1212 20:19:22.243752   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key.2f0f2646: {Name:mkb45aa4995ba078d3888978fd949a63e925a841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:22.243813   29681 certs.go:337] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt.2f0f2646 -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt
	I1212 20:19:22.243920   29681 certs.go:341] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key.2f0f2646 -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key
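	A quick way (a sketch, not part of the log) to confirm the SANs baked into the apiserver certificate generated above:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'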
	I1212 20:19:22.243974   29681 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key
	I1212 20:19:22.243988   29681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt with IP's: []
	I1212 20:19:22.582352   29681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt ...
	I1212 20:19:22.582390   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt: {Name:mk98a4c640db1877662d21243742c12a41d020bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:22.582545   29681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key ...
	I1212 20:19:22.582558   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key: {Name:mk4b4df4d5fd909084436bca8dab0a0ea797336a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
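	For comparison, a rough openssl equivalent of what crypto.go does programmatically above when signing the proxy-client (aggregator) pair; the subject fields and file names here are illustrative assumptions, not values read from the log:
	# assumed subject; minikube sets its own CN/O internally
	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=proxy-client" \
	  -keyout proxy-client.key -out proxy-client.csr
	openssl x509 -req -in proxy-client.csr -CA proxy-client-ca.crt -CAkey proxy-client-ca.key \
	  -CAcreateserial -days 365 -out proxy-client.crt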
	I1212 20:19:22.582632   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:19:22.582650   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:19:22.582659   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:19:22.582679   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:19:22.582691   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:19:22.582702   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:19:22.582714   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:19:22.582726   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:19:22.582771   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:19:22.582815   29681 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:19:22.582825   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:19:22.582844   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:19:22.582868   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:19:22.582893   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:19:22.582928   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:19:22.582960   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem -> /usr/share/ca-certificates/16456.pem
	I1212 20:19:22.582976   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /usr/share/ca-certificates/164562.pem
	I1212 20:19:22.582989   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:19:22.583560   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 20:19:22.609673   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:19:22.635172   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:19:22.660054   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:19:22.684227   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:19:22.708511   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:19:22.733011   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:19:22.756525   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:19:22.780071   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:19:22.803016   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:19:22.826610   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:19:22.850360   29681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:19:22.866551   29681 ssh_runner.go:195] Run: openssl version
	I1212 20:19:22.871846   29681 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 20:19:22.872112   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:19:22.881820   29681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:19:22.886685   29681 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:19:22.886802   29681 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:19:22.886856   29681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:19:22.892378   29681 command_runner.go:130] > 51391683
	I1212 20:19:22.892514   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:19:22.902063   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:19:22.911787   29681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:19:22.916478   29681 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:19:22.916505   29681 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:19:22.916544   29681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:19:22.921928   29681 command_runner.go:130] > 3ec20f2e
	I1212 20:19:22.921984   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 20:19:22.931513   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:19:22.940865   29681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:19:22.945320   29681 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:19:22.945431   29681 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:19:22.945480   29681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:19:22.950621   29681 command_runner.go:130] > b5213941
	I1212 20:19:22.950692   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 20:19:22.960112   29681 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:19:22.964646   29681 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:19:22.964813   29681 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:19:22.964868   29681 kubeadm.go:404] StartCluster: {Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:19:22.964934   29681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:19:22.964979   29681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:19:23.010734   29681 cri.go:89] found id: ""
	I1212 20:19:23.010817   29681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:19:23.021259   29681 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 20:19:23.021283   29681 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 20:19:23.021289   29681 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 20:19:23.021409   29681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:19:23.031752   29681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:19:23.040755   29681 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 20:19:23.040787   29681 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 20:19:23.040803   29681 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 20:19:23.040810   29681 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:19:23.040845   29681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:19:23.040882   29681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 20:19:23.144347   29681 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 20:19:23.144387   29681 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 20:19:23.144552   29681 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 20:19:23.144576   29681 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 20:19:23.383325   29681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:19:23.383355   29681 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:19:23.383479   29681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:19:23.383492   29681 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:19:23.383608   29681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 20:19:23.383620   29681 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 20:19:23.619291   29681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:19:23.749036   29681 out.go:204]   - Generating certificates and keys ...
	I1212 20:19:23.619342   29681 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:19:23.749265   29681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 20:19:23.749285   29681 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 20:19:23.749359   29681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 20:19:23.749370   29681 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 20:19:23.749487   29681 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:19:23.749510   29681 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:19:23.910202   29681 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:19:23.910238   29681 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:19:24.053772   29681 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:19:24.053805   29681 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 20:19:24.421032   29681 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 20:19:24.421057   29681 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 20:19:24.651426   29681 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 20:19:24.651456   29681 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 20:19:24.651872   29681 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-562818] and IPs [192.168.39.77 127.0.0.1 ::1]
	I1212 20:19:24.651894   29681 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-562818] and IPs [192.168.39.77 127.0.0.1 ::1]
	I1212 20:19:24.844869   29681 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 20:19:24.844896   29681 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 20:19:24.845009   29681 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-562818] and IPs [192.168.39.77 127.0.0.1 ::1]
	I1212 20:19:24.845019   29681 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-562818] and IPs [192.168.39.77 127.0.0.1 ::1]
	I1212 20:19:24.896620   29681 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:19:24.896650   29681 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:19:25.545624   29681 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:19:25.545676   29681 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:19:25.594860   29681 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 20:19:25.594893   29681 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 20:19:25.595017   29681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:19:25.595032   29681 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:19:25.747806   29681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:19:25.747841   29681 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:19:26.108051   29681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:19:26.108081   29681 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:19:26.431036   29681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:19:26.431062   29681 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:19:26.536969   29681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:19:26.537004   29681 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:19:26.537733   29681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:19:26.537752   29681 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:19:26.542844   29681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:19:26.544700   29681 out.go:204]   - Booting up control plane ...
	I1212 20:19:26.542961   29681 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:19:26.544874   29681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:19:26.544892   29681 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:19:26.545003   29681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:19:26.545017   29681 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:19:26.545516   29681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:19:26.545530   29681 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:19:26.560418   29681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:19:26.560446   29681 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:19:26.561835   29681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:19:26.561853   29681 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:19:26.561897   29681 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 20:19:26.561924   29681 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 20:19:26.696711   29681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 20:19:26.696735   29681 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 20:19:34.197850   29681 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504387 seconds
	I1212 20:19:34.197879   29681 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.504387 seconds
	I1212 20:19:34.198054   29681 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:19:34.198086   29681 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:19:34.219593   29681 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:19:34.219629   29681 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:19:34.753049   29681 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:19:34.753084   29681 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:19:34.753428   29681 kubeadm.go:322] [mark-control-plane] Marking the node multinode-562818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:19:34.753460   29681 command_runner.go:130] > [mark-control-plane] Marking the node multinode-562818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:19:35.274558   29681 kubeadm.go:322] [bootstrap-token] Using token: 54hyng.phn6d6y7fya0pwml
	I1212 20:19:35.274581   29681 command_runner.go:130] > [bootstrap-token] Using token: 54hyng.phn6d6y7fya0pwml
	I1212 20:19:35.276247   29681 out.go:204]   - Configuring RBAC rules ...
	I1212 20:19:35.276379   29681 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:19:35.276394   29681 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:19:35.283595   29681 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:19:35.283612   29681 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:19:35.301686   29681 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:19:35.301725   29681 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:19:35.307921   29681 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:19:35.307956   29681 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:19:35.326206   29681 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:19:35.326234   29681 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:19:35.330880   29681 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:19:35.330917   29681 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:19:35.347378   29681 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:19:35.347399   29681 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:19:35.632358   29681 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 20:19:35.632386   29681 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 20:19:35.690077   29681 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 20:19:35.690111   29681 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 20:19:35.690123   29681 kubeadm.go:322] 
	I1212 20:19:35.690206   29681 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 20:19:35.690225   29681 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 20:19:35.690256   29681 kubeadm.go:322] 
	I1212 20:19:35.690335   29681 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 20:19:35.690346   29681 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 20:19:35.690350   29681 kubeadm.go:322] 
	I1212 20:19:35.690376   29681 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 20:19:35.690387   29681 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 20:19:35.690459   29681 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:19:35.690470   29681 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:19:35.690554   29681 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:19:35.690575   29681 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:19:35.690582   29681 kubeadm.go:322] 
	I1212 20:19:35.690672   29681 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 20:19:35.690684   29681 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 20:19:35.690690   29681 kubeadm.go:322] 
	I1212 20:19:35.690768   29681 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:19:35.690779   29681 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:19:35.690789   29681 kubeadm.go:322] 
	I1212 20:19:35.690869   29681 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 20:19:35.690890   29681 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 20:19:35.690983   29681 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:19:35.690996   29681 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:19:35.691086   29681 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:19:35.691095   29681 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:19:35.691098   29681 kubeadm.go:322] 
	I1212 20:19:35.691163   29681 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:19:35.691170   29681 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:19:35.691353   29681 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 20:19:35.691372   29681 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 20:19:35.691379   29681 kubeadm.go:322] 
	I1212 20:19:35.691476   29681 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 54hyng.phn6d6y7fya0pwml \
	I1212 20:19:35.691487   29681 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 54hyng.phn6d6y7fya0pwml \
	I1212 20:19:35.691604   29681 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 20:19:35.691618   29681 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 20:19:35.691643   29681 kubeadm.go:322] 	--control-plane 
	I1212 20:19:35.691651   29681 command_runner.go:130] > 	--control-plane 
	I1212 20:19:35.691661   29681 kubeadm.go:322] 
	I1212 20:19:35.691759   29681 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:19:35.691787   29681 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:19:35.691866   29681 kubeadm.go:322] 
	I1212 20:19:35.691990   29681 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 54hyng.phn6d6y7fya0pwml \
	I1212 20:19:35.692003   29681 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 54hyng.phn6d6y7fya0pwml \
	I1212 20:19:35.692134   29681 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 20:19:35.692150   29681 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 20:19:35.692346   29681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:19:35.692368   29681 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:19:35.692385   29681 cni.go:84] Creating CNI manager for ""
	I1212 20:19:35.692394   29681 cni.go:136] 1 nodes found, recommending kindnet
	I1212 20:19:35.695149   29681 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 20:19:35.696549   29681 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:19:35.730772   29681 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 20:19:35.730808   29681 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 20:19:35.730819   29681 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 20:19:35.730829   29681 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:19:35.730838   29681 command_runner.go:130] > Access: 2023-12-12 20:19:04.330369533 +0000
	I1212 20:19:35.730846   29681 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 20:19:35.730855   29681 command_runner.go:130] > Change: 2023-12-12 20:19:02.458369533 +0000
	I1212 20:19:35.730865   29681 command_runner.go:130] >  Birth: -
	I1212 20:19:35.730925   29681 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 20:19:35.730939   29681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 20:19:35.771293   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:19:36.805649   29681 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 20:19:36.813819   29681 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 20:19:36.824778   29681 command_runner.go:130] > serviceaccount/kindnet created
	I1212 20:19:36.841359   29681 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 20:19:36.844204   29681 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.072849149s)
	I1212 20:19:36.844252   29681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:19:36.844325   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:36.844349   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=multinode-562818 minikube.k8s.io/updated_at=2023_12_12T20_19_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:36.869802   29681 command_runner.go:130] > -16
	I1212 20:19:36.869876   29681 ops.go:34] apiserver oom_adj: -16
	I1212 20:19:37.048656   29681 command_runner.go:130] > node/multinode-562818 labeled
	I1212 20:19:37.080817   29681 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 20:19:37.080942   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:37.172711   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:37.174330   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:37.284009   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:37.786564   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:37.876729   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:38.286289   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:38.370656   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:38.786292   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:38.872709   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:39.286811   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:39.383477   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:39.785991   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:39.886883   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:40.286436   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:40.372043   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:40.786704   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:40.885494   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:41.286876   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:41.368118   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:41.786617   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:41.869698   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:42.286336   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:42.384566   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:42.786866   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:42.881613   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:43.286150   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:43.366563   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:43.786176   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:43.875263   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:44.286959   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:44.385491   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:44.786349   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:44.869333   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:45.286950   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:45.377446   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:45.785985   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:45.903643   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:46.286740   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:46.383601   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:46.786767   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:46.907388   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:47.286703   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:47.406177   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:47.786832   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:47.902783   29681 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 20:19:48.286755   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:19:48.390326   29681 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 20:19:48.390354   29681 command_runner.go:130] > default   0         1s
	I1212 20:19:48.390420   29681 kubeadm.go:1088] duration metric: took 11.546151005s to wait for elevateKubeSystemPrivileges.
	I1212 20:19:48.390447   29681 kubeadm.go:406] StartCluster complete in 25.425584406s
	I1212 20:19:48.390468   29681 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:48.390557   29681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:19:48.391522   29681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:19:48.391762   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:19:48.391808   29681 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 20:19:48.391882   29681 addons.go:69] Setting storage-provisioner=true in profile "multinode-562818"
	I1212 20:19:48.391906   29681 addons.go:231] Setting addon storage-provisioner=true in "multinode-562818"
	I1212 20:19:48.391911   29681 addons.go:69] Setting default-storageclass=true in profile "multinode-562818"
	I1212 20:19:48.391933   29681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-562818"
	I1212 20:19:48.391987   29681 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:19:48.392028   29681 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:19:48.392174   29681 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:19:48.392399   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:19:48.392445   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:19:48.392738   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:19:48.392866   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:19:48.392863   29681 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:19:48.394204   29681 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 20:19:48.394531   29681 round_trippers.go:463] GET https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:19:48.394550   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:48.394562   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:48.394572   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:48.409486   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1212 20:19:48.409984   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:19:48.410503   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:19:48.410529   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:19:48.410907   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:19:48.411072   29681 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:19:48.413301   29681 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:19:48.413526   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42179
	I1212 20:19:48.413604   29681 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:19:48.413836   29681 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1212 20:19:48.413851   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:48.413861   29681 round_trippers.go:580]     Audit-Id: ab2f94f0-943e-4eb3-a374-a394bda6390e
	I1212 20:19:48.413869   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:48.413877   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:48.413885   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:48.413892   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:48.413902   29681 round_trippers.go:580]     Content-Length: 291
	I1212 20:19:48.413909   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:48 GMT
	I1212 20:19:48.413944   29681 addons.go:231] Setting addon default-storageclass=true in "multinode-562818"
	I1212 20:19:48.413963   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:19:48.413978   29681 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:19:48.413985   29681 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"350","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 20:19:48.414386   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:19:48.414410   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:19:48.414418   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:19:48.414440   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:19:48.414448   29681 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"350","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 20:19:48.414520   29681 round_trippers.go:463] PUT https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:19:48.414531   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:48.414542   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:48.414552   29681 round_trippers.go:473]     Content-Type: application/json
	I1212 20:19:48.414562   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:48.414795   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:19:48.415272   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:19:48.415303   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:19:48.426779   29681 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 20:19:48.426811   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:48.426822   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:48.426832   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:48.426839   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:48.426846   29681 round_trippers.go:580]     Content-Length: 291
	I1212 20:19:48.426854   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:48 GMT
	I1212 20:19:48.426869   29681 round_trippers.go:580]     Audit-Id: 8980de35-bdd9-41d6-8983-31adb50c85d3
	I1212 20:19:48.426887   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:48.426922   29681 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"351","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 20:19:48.427103   29681 round_trippers.go:463] GET https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:19:48.427123   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:48.427136   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:48.427148   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:48.429025   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I1212 20:19:48.429240   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 20:19:48.429480   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:19:48.429600   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:19:48.429928   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:19:48.429955   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:19:48.430078   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:19:48.430102   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:19:48.430305   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:19:48.430385   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:19:48.430461   29681 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:19:48.430947   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:19:48.430990   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:19:48.432047   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:48.434404   29681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:19:48.436125   29681 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:19:48.436147   29681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:19:48.436168   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:48.436189   29681 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 20:19:48.436208   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:48.436222   29681 round_trippers.go:580]     Audit-Id: 533a49b5-5530-4e00-b500-55558eff6ee9
	I1212 20:19:48.436239   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:48.436249   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:48.436258   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:48.436268   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:48.436282   29681 round_trippers.go:580]     Content-Length: 291
	I1212 20:19:48.436302   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:48 GMT
	I1212 20:19:48.436595   29681 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"351","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 20:19:48.436716   29681 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-562818" context rescaled to 1 replicas
	I1212 20:19:48.436754   29681 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:19:48.438529   29681 out.go:177] * Verifying Kubernetes components...
	I1212 20:19:48.440327   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:19:48.439641   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:48.440405   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:48.440431   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:48.440201   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:48.440606   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:48.440766   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:48.440926   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:19:48.448192   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I1212 20:19:48.448635   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:19:48.449124   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:19:48.449148   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:19:48.449436   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:19:48.449628   29681 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:19:48.451451   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:19:48.451708   29681 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:19:48.451723   29681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:19:48.451736   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:19:48.454744   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:48.455140   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:19:48.455174   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:19:48.455347   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:19:48.455551   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:19:48.455735   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:19:48.455868   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:19:48.582861   29681 command_runner.go:130] > apiVersion: v1
	I1212 20:19:48.582883   29681 command_runner.go:130] > data:
	I1212 20:19:48.582893   29681 command_runner.go:130] >   Corefile: |
	I1212 20:19:48.582898   29681 command_runner.go:130] >     .:53 {
	I1212 20:19:48.582904   29681 command_runner.go:130] >         errors
	I1212 20:19:48.582912   29681 command_runner.go:130] >         health {
	I1212 20:19:48.582920   29681 command_runner.go:130] >            lameduck 5s
	I1212 20:19:48.582925   29681 command_runner.go:130] >         }
	I1212 20:19:48.582930   29681 command_runner.go:130] >         ready
	I1212 20:19:48.582941   29681 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 20:19:48.582974   29681 command_runner.go:130] >            pods insecure
	I1212 20:19:48.582991   29681 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 20:19:48.583000   29681 command_runner.go:130] >            ttl 30
	I1212 20:19:48.583006   29681 command_runner.go:130] >         }
	I1212 20:19:48.583018   29681 command_runner.go:130] >         prometheus :9153
	I1212 20:19:48.583026   29681 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 20:19:48.583038   29681 command_runner.go:130] >            max_concurrent 1000
	I1212 20:19:48.583048   29681 command_runner.go:130] >         }
	I1212 20:19:48.583055   29681 command_runner.go:130] >         cache 30
	I1212 20:19:48.583065   29681 command_runner.go:130] >         loop
	I1212 20:19:48.583074   29681 command_runner.go:130] >         reload
	I1212 20:19:48.583085   29681 command_runner.go:130] >         loadbalance
	I1212 20:19:48.583092   29681 command_runner.go:130] >     }
	I1212 20:19:48.583102   29681 command_runner.go:130] > kind: ConfigMap
	I1212 20:19:48.583112   29681 command_runner.go:130] > metadata:
	I1212 20:19:48.583125   29681 command_runner.go:130] >   creationTimestamp: "2023-12-12T20:19:35Z"
	I1212 20:19:48.583136   29681 command_runner.go:130] >   name: coredns
	I1212 20:19:48.583146   29681 command_runner.go:130] >   namespace: kube-system
	I1212 20:19:48.583158   29681 command_runner.go:130] >   resourceVersion: "231"
	I1212 20:19:48.583170   29681 command_runner.go:130] >   uid: 9a863f66-aa0a-4fa3-b434-57a840a88dcb
	I1212 20:19:48.584503   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
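
The bash pipeline above rewrites the live coredns ConfigMap dumped just before it: a hosts block mapping host.minikube.internal to the gateway IP (192.168.39.1) is spliced in ahead of the forward plugin, a log directive is added before errors, and the result is pushed back with kubectl replace. A minimal Go sketch of the hosts-block edit (hypothetical helper; the real change is made by the sed pipeline running inside the VM):

    package sketch

    import "strings"

    // injectHostRecord inserts a hosts block for host.minikube.internal
    // immediately before the forward plugin of a CoreDNS Corefile.
    func injectHostRecord(corefile, hostIP string) string {
        hosts := "        hosts {\n" +
            "           " + hostIP + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        marker := "        forward . /etc/resolv.conf"
        // Leave the Corefile untouched if the forward plugin is missing.
        if !strings.Contains(corefile, marker) {
            return corefile
        }
        return strings.Replace(corefile, marker, hosts+marker, 1)
    }
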
	I1212 20:19:48.584843   29681 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:19:48.585161   29681 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:19:48.585486   29681 node_ready.go:35] waiting up to 6m0s for node "multinode-562818" to be "Ready" ...
	I1212 20:19:48.585595   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:48.585612   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:48.585623   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:48.585632   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:48.593001   29681 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 20:19:48.593022   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:48.593031   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:48 GMT
	I1212 20:19:48.593040   29681 round_trippers.go:580]     Audit-Id: b6b14137-8344-4339-94dd-c662a9dd8e12
	I1212 20:19:48.593047   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:48.593055   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:48.593062   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:48.593070   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:48.593248   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:48.594020   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:48.594040   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:48.594059   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:48.594072   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:48.598054   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:48.598074   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:48.598082   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:48.598090   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:48 GMT
	I1212 20:19:48.598098   29681 round_trippers.go:580]     Audit-Id: bea44cdd-5fc1-4122-aeb1-4cec95bd377a
	I1212 20:19:48.598107   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:48.598117   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:48.598125   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:48.598261   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:48.617322   29681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:19:48.636283   29681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
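
Both addon manifests are applied with the kubelet-version-matched kubectl shipped inside the VM (/var/lib/minikube/binaries/v1.28.4/kubectl) against the in-VM kubeconfig. A hedged os/exec sketch of the equivalent invocation (hypothetical helper; minikube actually runs the command over SSH via ssh_runner, as logged above):

    package sketch

    import "os/exec"

    // applyAddon runs the in-VM kubectl against one addon manifest, using the
    // same sudo + KUBECONFIG form seen in the log above.
    func applyAddon(binDir, manifest string) ([]byte, error) {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            binDir+"/kubectl", "apply", "-f", manifest)
        // CombinedOutput captures lines such as
        // "storageclass.storage.k8s.io/standard created".
        return cmd.CombinedOutput()
    }
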
	I1212 20:19:49.099625   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:49.099661   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:49.099675   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:49.099685   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:49.115995   29681 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1212 20:19:49.116023   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:49.116036   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:49.116044   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:49 GMT
	I1212 20:19:49.116052   29681 round_trippers.go:580]     Audit-Id: 96ef8a8c-6ca0-49bf-9cc1-4e5d1cf5d93f
	I1212 20:19:49.116079   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:49.116086   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:49.116093   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:49.116337   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:49.207965   29681 command_runner.go:130] > configmap/coredns replaced
	I1212 20:19:49.210604   29681 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 20:19:49.210602   29681 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 20:19:49.210664   29681 main.go:141] libmachine: Making call to close driver server
	I1212 20:19:49.210682   29681 main.go:141] libmachine: (multinode-562818) Calling .Close
	I1212 20:19:49.210960   29681 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:19:49.210978   29681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:19:49.210988   29681 main.go:141] libmachine: Making call to close driver server
	I1212 20:19:49.210988   29681 main.go:141] libmachine: (multinode-562818) DBG | Closing plugin on server side
	I1212 20:19:49.210998   29681 main.go:141] libmachine: (multinode-562818) Calling .Close
	I1212 20:19:49.211255   29681 main.go:141] libmachine: (multinode-562818) DBG | Closing plugin on server side
	I1212 20:19:49.211334   29681 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:19:49.211361   29681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:19:49.211474   29681 round_trippers.go:463] GET https://192.168.39.77:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 20:19:49.211488   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:49.211499   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:49.211507   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:49.214517   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:49.214540   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:49.214551   29681 round_trippers.go:580]     Content-Length: 1273
	I1212 20:19:49.214560   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:49 GMT
	I1212 20:19:49.214569   29681 round_trippers.go:580]     Audit-Id: 6ea5ce5b-c71c-45cb-b7fa-1f2235050b79
	I1212 20:19:49.214577   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:49.214589   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:49.214606   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:49.214619   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:49.214682   29681 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"364"},"items":[{"metadata":{"name":"standard","uid":"2e4cd5b8-ce5c-4b82-96c2-bbd8fa9222e5","resourceVersion":"362","creationTimestamp":"2023-12-12T20:19:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:19:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 20:19:49.215170   29681 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2e4cd5b8-ce5c-4b82-96c2-bbd8fa9222e5","resourceVersion":"362","creationTimestamp":"2023-12-12T20:19:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:19:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 20:19:49.215253   29681 round_trippers.go:463] PUT https://192.168.39.77:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 20:19:49.215266   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:49.215276   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:49.215291   29681 round_trippers.go:473]     Content-Type: application/json
	I1212 20:19:49.215302   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:49.218205   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:49.218221   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:49.218230   29681 round_trippers.go:580]     Audit-Id: f9cc139e-2ddd-4318-8745-8f6caeaec8ab
	I1212 20:19:49.218237   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:49.218244   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:49.218252   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:49.218260   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:49.218273   29681 round_trippers.go:580]     Content-Length: 1220
	I1212 20:19:49.218283   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:49 GMT
	I1212 20:19:49.218323   29681 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2e4cd5b8-ce5c-4b82-96c2-bbd8fa9222e5","resourceVersion":"362","creationTimestamp":"2023-12-12T20:19:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:19:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 20:19:49.218441   29681 main.go:141] libmachine: Making call to close driver server
	I1212 20:19:49.218456   29681 main.go:141] libmachine: (multinode-562818) Calling .Close
	I1212 20:19:49.218727   29681 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:19:49.218748   29681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:19:49.324085   29681 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 20:19:49.336441   29681 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 20:19:49.351443   29681 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 20:19:49.361598   29681 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 20:19:49.370041   29681 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 20:19:49.383558   29681 command_runner.go:130] > pod/storage-provisioner created
	I1212 20:19:49.386249   29681 main.go:141] libmachine: Making call to close driver server
	I1212 20:19:49.386280   29681 main.go:141] libmachine: (multinode-562818) Calling .Close
	I1212 20:19:49.386613   29681 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:19:49.386657   29681 main.go:141] libmachine: (multinode-562818) DBG | Closing plugin on server side
	I1212 20:19:49.386671   29681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:19:49.386686   29681 main.go:141] libmachine: Making call to close driver server
	I1212 20:19:49.386697   29681 main.go:141] libmachine: (multinode-562818) Calling .Close
	I1212 20:19:49.386948   29681 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:19:49.386963   29681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:19:49.388857   29681 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 20:19:49.390385   29681 addons.go:502] enable addons completed in 998.581411ms: enabled=[default-storageclass storage-provisioner]
	I1212 20:19:49.599708   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:49.599738   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:49.599749   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:49.599755   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:49.602486   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:49.602520   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:49.602533   29681 round_trippers.go:580]     Audit-Id: db821abc-3df2-408f-a5ed-4ae064320dbe
	I1212 20:19:49.602541   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:49.602550   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:49.602558   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:49.602569   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:49.602574   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:49 GMT
	I1212 20:19:49.602741   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:50.099450   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:50.099482   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:50.099491   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:50.099497   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:50.102615   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:50.102635   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:50.102641   29681 round_trippers.go:580]     Audit-Id: 7075c83b-067a-4b67-b91f-ae37c1f5ff02
	I1212 20:19:50.102646   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:50.102651   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:50.102657   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:50.102662   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:50.102666   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:50 GMT
	I1212 20:19:50.103766   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:50.599518   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:50.599581   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:50.599590   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:50.599597   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:50.602245   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:50.602264   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:50.602270   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:50.602276   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:50.602284   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:50.602289   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:50 GMT
	I1212 20:19:50.602295   29681 round_trippers.go:580]     Audit-Id: 60ea18f8-9d2a-4a87-b980-c982170325e0
	I1212 20:19:50.602303   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:50.602509   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:50.602806   29681 node_ready.go:58] node "multinode-562818" has status "Ready":"False"
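
Each cycle above is the same probe: GET the node object roughly every half second and inspect its Ready condition, giving up after the 6m0s budget declared earlier. A rough client-go sketch of that loop (assuming k8s.io/client-go and a hypothetical helper; minikube issues these requests through the logged round-tripper rather than this code):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node until its Ready condition reports True.
    func waitNodeReady(cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient errors and retry
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
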
	I1212 20:19:51.099342   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:51.099369   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:51.099378   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:51.099385   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:51.103331   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:51.103359   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:51.103367   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:51.103373   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:51 GMT
	I1212 20:19:51.103378   29681 round_trippers.go:580]     Audit-Id: ad465ac1-5660-4b63-88f2-31763302c9b6
	I1212 20:19:51.103384   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:51.103389   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:51.103398   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:51.103722   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:51.599271   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:51.599305   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:51.599314   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:51.599320   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:51.602225   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:51.602251   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:51.602259   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:51 GMT
	I1212 20:19:51.602269   29681 round_trippers.go:580]     Audit-Id: ee5d6299-cede-46b0-b3e3-bb8a9dd99150
	I1212 20:19:51.602278   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:51.602287   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:51.602297   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:51.602305   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:51.602470   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:52.099112   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:52.099147   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:52.099155   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:52.099161   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:52.101674   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:52.101699   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:52.101705   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:52.101710   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:52.101715   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:52.101720   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:52 GMT
	I1212 20:19:52.101725   29681 round_trippers.go:580]     Audit-Id: f4a89396-54ea-4068-a186-a0156d5ad776
	I1212 20:19:52.101730   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:52.101911   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:52.599674   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:52.599705   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:52.599714   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:52.599720   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:52.602707   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:52.602737   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:52.602747   29681 round_trippers.go:580]     Audit-Id: 2c90af42-f7aa-428b-8a2b-b3fd3280283a
	I1212 20:19:52.602760   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:52.602768   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:52.602776   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:52.602788   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:52.602794   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:52 GMT
	I1212 20:19:52.604088   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:52.604390   29681 node_ready.go:58] node "multinode-562818" has status "Ready":"False"
	I1212 20:19:53.098723   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:53.098747   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:53.098755   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:53.098761   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:53.101437   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:53.101460   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:53.101468   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:53.101475   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:53.101489   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:53 GMT
	I1212 20:19:53.101497   29681 round_trippers.go:580]     Audit-Id: baae5b0f-6097-4ffb-b180-8c79559a2544
	I1212 20:19:53.101507   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:53.101514   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:53.101936   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:53.599320   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:53.599347   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:53.599356   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:53.599362   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:53.602265   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:53.602289   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:53.602296   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:53.602302   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:53.602307   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:53 GMT
	I1212 20:19:53.602312   29681 round_trippers.go:580]     Audit-Id: 10594b52-43f7-4689-b4e0-2a2304b6ebc5
	I1212 20:19:53.602317   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:53.602322   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:53.602475   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:54.098853   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:54.098886   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:54.098897   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:54.098906   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:54.102060   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:54.102085   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:54.102092   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:54.102098   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:54 GMT
	I1212 20:19:54.102104   29681 round_trippers.go:580]     Audit-Id: 2c793a8a-7c76-4bcd-adfe-75ff7953496c
	I1212 20:19:54.102109   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:54.102117   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:54.102126   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:54.102291   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:54.598923   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:54.598955   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:54.598964   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:54.598972   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:54.601846   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:54.601877   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:54.601887   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:54.601895   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:54.601903   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:54.601911   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:54 GMT
	I1212 20:19:54.601919   29681 round_trippers.go:580]     Audit-Id: a40936f2-0ef9-47d4-9578-786c62af53c0
	I1212 20:19:54.601927   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:54.602333   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"314","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 20:19:55.098974   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:55.099003   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.099018   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.099026   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.101803   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:55.101833   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.101844   29681 round_trippers.go:580]     Audit-Id: 3fd71e6c-9e52-4b57-bff0-be97a624e13f
	I1212 20:19:55.101853   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.101859   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.101867   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.101877   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.101885   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.102444   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:55.102835   29681 node_ready.go:49] node "multinode-562818" has status "Ready":"True"
	I1212 20:19:55.102856   29681 node_ready.go:38] duration metric: took 6.51734227s waiting for node "multinode-562818" to be "Ready" ...
	I1212 20:19:55.102865   29681 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
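
With the node Ready, the same pattern repeats per pod: every kube-system pod matching the labels listed above is watched until its PodReady condition reports True. A short sketch of that per-pod check, under the same client-go assumption as the node sketch (hypothetical helper name):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether a pod's Ready condition is True, which is the
    // test applied to each system-critical pod enumerated above.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
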
	I1212 20:19:55.102941   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:19:55.102951   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.102965   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.102976   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.106351   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:55.106380   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.106389   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.106395   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.106400   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.106408   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.106413   29681 round_trippers.go:580]     Audit-Id: 0cb1eaf1-582a-4525-9e2e-66d99b439700
	I1212 20:19:55.106418   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.107212   29681 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"395","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54553 chars]
	I1212 20:19:55.111411   29681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:55.111487   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:19:55.111495   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.111503   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.111509   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.114025   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:55.114043   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.114050   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.114056   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.114061   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.114066   29681 round_trippers.go:580]     Audit-Id: 6b616a3c-760e-4878-a4b4-17c8a8685570
	I1212 20:19:55.114072   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.114077   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.114541   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"395","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 20:19:55.115045   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:55.115060   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.115072   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.115082   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.117492   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:55.117513   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.117523   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.117529   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.117534   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.117540   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.117545   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.117553   29681 round_trippers.go:580]     Audit-Id: 4569eb1d-7251-466c-8496-e30d18c55591
	I1212 20:19:55.117909   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:55.118235   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:19:55.118246   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.118253   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.118259   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.122136   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:55.122159   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.122171   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.122177   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.122182   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.122187   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.122192   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.122197   29681 round_trippers.go:580]     Audit-Id: f362648b-8794-408b-a8f6-72df28526749
	I1212 20:19:55.122341   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"395","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 20:19:55.122729   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:55.122740   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.122756   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.122762   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.125469   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:55.125484   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.125492   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.125501   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.125509   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.125518   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.125526   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.125533   29681 round_trippers.go:580]     Audit-Id: 0fefb26b-023a-4959-b242-636eb8657853
	I1212 20:19:55.126367   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:55.627139   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:19:55.627163   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.627171   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.627177   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.629567   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:55.629593   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.629615   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.629623   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.629631   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.629638   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.629646   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.629654   29681 round_trippers.go:580]     Audit-Id: f10c5c9e-162e-41ab-8cac-21040328ba25
	I1212 20:19:55.629800   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"395","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 20:19:55.630367   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:55.630388   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:55.630399   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:55.630408   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:55.632309   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:55.632329   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:55.632338   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:55.632349   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:55.632357   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:55.632366   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:55.632375   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:55 GMT
	I1212 20:19:55.632380   29681 round_trippers.go:580]     Audit-Id: 482c937e-7118-493d-b814-4c684377ec91
	I1212 20:19:55.632639   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:56.127302   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:19:56.127326   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:56.127337   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:56.127346   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:56.131947   29681 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:19:56.131976   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:56.131986   29681 round_trippers.go:580]     Audit-Id: b07bffcc-0d26-4831-94db-d089af51671a
	I1212 20:19:56.131994   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:56.132002   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:56.132010   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:56.132018   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:56.132029   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:56 GMT
	I1212 20:19:56.132236   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"395","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 20:19:56.132662   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:56.132678   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:56.132686   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:56.132692   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:56.135272   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:56.135293   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:56.135303   29681 round_trippers.go:580]     Audit-Id: 22fbe312-3a9c-4952-8748-cf5cde43c59e
	I1212 20:19:56.135311   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:56.135319   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:56.135327   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:56.135335   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:56.135347   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:56 GMT
	I1212 20:19:56.135877   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:56.627597   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:19:56.627622   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:56.627630   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:56.627636   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:56.630889   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:56.630908   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:56.630915   29681 round_trippers.go:580]     Audit-Id: 5d0a96b4-856c-4839-87fd-ef4f7e51a2e2
	I1212 20:19:56.630921   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:56.630926   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:56.630931   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:56.630936   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:56.630943   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:56 GMT
	I1212 20:19:56.631802   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"395","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 20:19:56.632205   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:56.632217   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:56.632223   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:56.632229   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:56.635142   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:56.635163   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:56.635173   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:56.635181   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:56 GMT
	I1212 20:19:56.635193   29681 round_trippers.go:580]     Audit-Id: 75b0ec33-fcf3-4b5c-a9c4-e4755b1c69bf
	I1212 20:19:56.635201   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:56.635209   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:56.635219   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:56.636116   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.127799   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:19:57.127824   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.127833   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.127839   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.130679   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:57.130713   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.130725   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.130737   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.130752   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.130763   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.130775   29681 round_trippers.go:580]     Audit-Id: 0cd5f9b2-7d27-43ef-b580-b4db7b5890a4
	I1212 20:19:57.130786   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.130963   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"410","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 20:19:57.131395   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.131408   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.131415   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.131421   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.133432   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.133451   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.133460   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.133468   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.133480   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.133489   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.133500   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.133511   29681 round_trippers.go:580]     Audit-Id: e2293c95-70aa-4b8f-8ae2-778937a1a67a
	I1212 20:19:57.133935   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.134219   29681 pod_ready.go:92] pod "coredns-5dd5756b68-689lp" in "kube-system" namespace has status "Ready":"True"
	I1212 20:19:57.134237   29681 pod_ready.go:81] duration metric: took 2.022798968s waiting for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
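For context, the pod_ready waits above repeatedly GET the pod and check its Ready condition. A minimal client-go sketch of that polling pattern, assuming an ordinary kubeconfig-based clientset (the kubeconfig path and the 500ms interval are illustrative assumptions, not the test harness's actual helper):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the poll cadence visible in the log above
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
    	// Kubeconfig path is an assumption for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-689lp", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }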
	I1212 20:19:57.134245   29681 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.134289   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-562818
	I1212 20:19:57.134297   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.134303   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.134309   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.136177   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.136194   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.136203   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.136211   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.136223   29681 round_trippers.go:580]     Audit-Id: 6a6eadc1-b6c5-4215-be8b-5ee37635cd63
	I1212 20:19:57.136232   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.136249   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.136255   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.136527   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"363","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 20:19:57.136830   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.136841   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.136847   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.136853   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.138682   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.138707   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.138717   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.138727   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.138736   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.138752   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.138760   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.138769   29681 round_trippers.go:580]     Audit-Id: 004accc0-ae8e-49d2-9a89-43b33857c6be
	I1212 20:19:57.139006   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.139293   29681 pod_ready.go:92] pod "etcd-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:19:57.139318   29681 pod_ready.go:81] duration metric: took 5.06657ms waiting for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.139328   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.139368   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:19:57.139375   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.139381   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.139387   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.141160   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.141178   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.141188   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.141196   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.141211   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.141218   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.141232   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.141244   29681 round_trippers.go:580]     Audit-Id: 89f0ab5d-d57e-49da-93db-09095827d932
	I1212 20:19:57.141392   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"398","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 20:19:57.141738   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.141753   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.141763   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.141771   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.143269   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.143285   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.143294   29681 round_trippers.go:580]     Audit-Id: e525dcd9-0247-4267-b2f2-79966ddeef1a
	I1212 20:19:57.143301   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.143309   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.143318   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.143325   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.143337   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.143507   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.143749   29681 pod_ready.go:92] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:19:57.143761   29681 pod_ready.go:81] duration metric: took 4.427364ms waiting for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.143769   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.143812   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-562818
	I1212 20:19:57.143830   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.143837   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.143843   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.145473   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.145488   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.145497   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.145508   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.145521   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.145534   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.145546   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.145558   29681 round_trippers.go:580]     Audit-Id: 028f9372-dc2a-4b7f-9758-b997a857ced3
	I1212 20:19:57.145769   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-562818","namespace":"kube-system","uid":"23b73a4b-e188-4b7c-a13d-1fd61862a4e1","resourceVersion":"399","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.mirror":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.seen":"2023-12-12T20:19:35.712598374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 20:19:57.146123   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.146138   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.146145   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.146152   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.147835   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.147849   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.147858   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.147866   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.147875   29681 round_trippers.go:580]     Audit-Id: 62b23a7e-038c-486d-8360-7162f804fd69
	I1212 20:19:57.147884   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.147894   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.147908   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.148038   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.148321   29681 pod_ready.go:92] pod "kube-controller-manager-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:19:57.148337   29681 pod_ready.go:81] duration metric: took 4.561678ms waiting for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.148350   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.299739   29681 request.go:629] Waited for 151.330346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:19:57.299823   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:19:57.299831   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.299843   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.299857   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.302574   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:57.302594   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.302604   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.302612   29681 round_trippers.go:580]     Audit-Id: afd6eff4-95e2-433f-b6b9-d7d7037cd8ea
	I1212 20:19:57.302620   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.302629   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.302638   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.302648   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.302807   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4rrmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"2bcd718f-0c7c-461a-895e-44a0c1d566fd","resourceVersion":"378","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 20:19:57.499565   29681 request.go:629] Waited for 196.369315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.499628   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.499639   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.499646   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.499655   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.502382   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:57.502406   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.502415   29681 round_trippers.go:580]     Audit-Id: 0578fd89-ab88-4220-adb8-f8089b491863
	I1212 20:19:57.502424   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.502433   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.502443   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.502453   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.502465   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.502657   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.503065   29681 pod_ready.go:92] pod "kube-proxy-4rrmn" in "kube-system" namespace has status "Ready":"True"
	I1212 20:19:57.503085   29681 pod_ready.go:81] duration metric: took 354.729907ms waiting for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
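The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's built-in client-side rate limiter (by default roughly 5 requests per second with a small burst), not from the apiserver. A minimal sketch of where those limits live on a rest.Config; the QPS/Burst values below are illustrative assumptions, not what minikube configures:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// client-go delays requests locally once QPS/Burst are exceeded, which is
    	// what produces the "Waited ... due to client-side throttling" log lines.
    	cfg.QPS = 50    // illustrative value
    	cfg.Burst = 100 // illustrative value
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }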
	I1212 20:19:57.503095   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.699544   29681 request.go:629] Waited for 196.382744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:19:57.699633   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:19:57.699641   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.699657   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.699676   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.702310   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:57.702328   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.702335   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.702340   29681 round_trippers.go:580]     Audit-Id: 51fb30a1-3fa6-4e57-bb7a-e9410d2cfca5
	I1212 20:19:57.702346   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.702353   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.702358   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.702366   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.702514   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"400","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 20:19:57.899255   29681 request.go:629] Waited for 196.346706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.899327   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:19:57.899332   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.899339   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.899345   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.902077   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:57.902101   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.902110   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.902118   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.902125   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.902134   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.902149   29681 round_trippers.go:580]     Audit-Id: de98a600-18a7-447a-ad2a-05dcf1eb7999
	I1212 20:19:57.902161   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.902376   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:19:57.902703   29681 pod_ready.go:92] pod "kube-scheduler-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:19:57.902721   29681 pod_ready.go:81] duration metric: took 399.614546ms waiting for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:19:57.902732   29681 pod_ready.go:38] duration metric: took 2.799851825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:19:57.902752   29681 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:19:57.902801   29681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:19:57.920289   29681 command_runner.go:130] > 1099
	I1212 20:19:57.920322   29681 api_server.go:72] duration metric: took 9.483538277s to wait for apiserver process to appear ...
	I1212 20:19:57.920334   29681 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:19:57.920353   29681 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:19:57.928702   29681 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
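The healthz probe above is a plain HTTPS GET against the apiserver that is expected to return "200 ok". A minimal standard-library sketch of the same kind of probe; certificate verification is skipped only to keep the sketch short (a real client would trust the cluster CA, and depending on anonymous-auth settings a bearer token may be required):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify keeps the example self-contained; do not use it outside a sketch.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.77:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
    }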
	I1212 20:19:57.928781   29681 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I1212 20:19:57.928792   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:57.928799   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:57.928808   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:57.929909   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:19:57.929930   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:57.929939   29681 round_trippers.go:580]     Audit-Id: baa2be07-14fd-4c84-8371-cfa262e1f151
	I1212 20:19:57.929947   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:57.929954   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:57.929961   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:57.929968   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:57.929976   29681 round_trippers.go:580]     Content-Length: 264
	I1212 20:19:57.929988   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:57 GMT
	I1212 20:19:57.930007   29681 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 20:19:57.930083   29681 api_server.go:141] control plane version: v1.28.4
	I1212 20:19:57.930099   29681 api_server.go:131] duration metric: took 9.759941ms to wait for apiserver health ...
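The JSON block above is the body of GET /version, from which the "control plane version: v1.28.4" line is derived. Decoding it needs only the standard library; a minimal sketch whose struct fields follow the response shown above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors the fields of the /version response printed in the log.
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	Platform   string `json:"platform"`
    }

    func main() {
    	raw := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
    	var v versionInfo
    	if err := json.Unmarshal(raw, &v); err != nil {
    		panic(err)
    	}
    	fmt.Printf("control plane version: %s\n", v.GitVersion) // matches the log line above
    }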
	I1212 20:19:57.930106   29681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:19:58.099593   29681 request.go:629] Waited for 169.391805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:19:58.099656   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:19:58.099663   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:58.099671   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:58.099678   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:58.103150   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:58.103177   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:58.103188   29681 round_trippers.go:580]     Audit-Id: 7c5331a8-4cce-4ea4-9d3a-df6563849107
	I1212 20:19:58.103199   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:58.103208   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:58.103218   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:58.103225   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:58.103230   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:58 GMT
	I1212 20:19:58.104631   29681 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"410","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1212 20:19:58.106199   29681 system_pods.go:59] 8 kube-system pods found
	I1212 20:19:58.106228   29681 system_pods.go:61] "coredns-5dd5756b68-689lp" [e77852fc-eb8a-4027-98e1-070b4ca43f54] Running
	I1212 20:19:58.106237   29681 system_pods.go:61] "etcd-multinode-562818" [5a874e4d-12ab-400c-8086-05073ffd1b13] Running
	I1212 20:19:58.106247   29681 system_pods.go:61] "kindnet-24p9c" [e80eb9ab-2919-4be1-890d-34c26202f7fc] Running
	I1212 20:19:58.106253   29681 system_pods.go:61] "kube-apiserver-multinode-562818" [7d766a87-0f52-46ef-b1fb-392a197bca9a] Running
	I1212 20:19:58.106259   29681 system_pods.go:61] "kube-controller-manager-multinode-562818" [23b73a4b-e188-4b7c-a13d-1fd61862a4e1] Running
	I1212 20:19:58.106262   29681 system_pods.go:61] "kube-proxy-4rrmn" [2bcd718f-0c7c-461a-895e-44a0c1d566fd] Running
	I1212 20:19:58.106266   29681 system_pods.go:61] "kube-scheduler-multinode-562818" [994614e5-3a18-422e-86ad-54c67237293d] Running
	I1212 20:19:58.106272   29681 system_pods.go:61] "storage-provisioner" [9efe55ce-d87d-4074-9983-d880908d6d3d] Running
	I1212 20:19:58.106277   29681 system_pods.go:74] duration metric: took 176.166869ms to wait for pod list to return data ...
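The 8-pod summary above comes from one List call against the kube-system namespace followed by a per-pod phase check. A minimal client-go sketch of that step, with the clientset built the same hypothetical way as in the earlier sketch:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Phase "Running" corresponds to the per-pod lines in the log above.
    		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
    	}
    }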
	I1212 20:19:58.106284   29681 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:19:58.299753   29681 request.go:629] Waited for 193.385718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I1212 20:19:58.299811   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I1212 20:19:58.299816   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:58.299823   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:58.299830   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:58.302341   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:19:58.302359   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:58.302365   29681 round_trippers.go:580]     Audit-Id: f75f66e8-fc0f-4c02-b2e0-b2f8532a705c
	I1212 20:19:58.302371   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:58.302376   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:58.302389   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:58.302395   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:58.302400   29681 round_trippers.go:580]     Content-Length: 261
	I1212 20:19:58.302405   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:58 GMT
	I1212 20:19:58.302421   29681 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"250ddd20-38f2-4339-8143-a461b27c59d0","resourceVersion":"315","creationTimestamp":"2023-12-12T20:19:47Z"}}]}
	I1212 20:19:58.302581   29681 default_sa.go:45] found service account: "default"
	I1212 20:19:58.302596   29681 default_sa.go:55] duration metric: took 196.304308ms for default service account to be created ...
	I1212 20:19:58.302603   29681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:19:58.499233   29681 request.go:629] Waited for 196.566991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:19:58.499330   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:19:58.499337   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:58.499345   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:58.499351   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:58.503097   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:19:58.503122   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:58.503129   29681 round_trippers.go:580]     Audit-Id: 09e8aed8-9d7b-4469-8413-61e916e0aec7
	I1212 20:19:58.503134   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:58.503142   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:58.503151   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:58.503158   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:58.503227   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:58 GMT
	I1212 20:19:58.504969   29681 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"410","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1212 20:19:58.506547   29681 system_pods.go:86] 8 kube-system pods found
	I1212 20:19:58.506564   29681 system_pods.go:89] "coredns-5dd5756b68-689lp" [e77852fc-eb8a-4027-98e1-070b4ca43f54] Running
	I1212 20:19:58.506569   29681 system_pods.go:89] "etcd-multinode-562818" [5a874e4d-12ab-400c-8086-05073ffd1b13] Running
	I1212 20:19:58.506573   29681 system_pods.go:89] "kindnet-24p9c" [e80eb9ab-2919-4be1-890d-34c26202f7fc] Running
	I1212 20:19:58.506577   29681 system_pods.go:89] "kube-apiserver-multinode-562818" [7d766a87-0f52-46ef-b1fb-392a197bca9a] Running
	I1212 20:19:58.506584   29681 system_pods.go:89] "kube-controller-manager-multinode-562818" [23b73a4b-e188-4b7c-a13d-1fd61862a4e1] Running
	I1212 20:19:58.506587   29681 system_pods.go:89] "kube-proxy-4rrmn" [2bcd718f-0c7c-461a-895e-44a0c1d566fd] Running
	I1212 20:19:58.506591   29681 system_pods.go:89] "kube-scheduler-multinode-562818" [994614e5-3a18-422e-86ad-54c67237293d] Running
	I1212 20:19:58.506595   29681 system_pods.go:89] "storage-provisioner" [9efe55ce-d87d-4074-9983-d880908d6d3d] Running
	I1212 20:19:58.506600   29681 system_pods.go:126] duration metric: took 203.99275ms to wait for k8s-apps to be running ...
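	For reference, the kube-system readiness that the waiter above walks through can also be spot-checked by hand. This is an illustrative sketch only, assuming the default kubectl context that minikube writes for the multinode-562818 profile:
	  # list the same kube-system pods the log enumerates, then wait for readiness
	  kubectl --context multinode-562818 get pods -n kube-system -o wide
	  kubectl --context multinode-562818 wait --for=condition=Ready pod --all -n kube-system --timeout=120s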
	I1212 20:19:58.506609   29681 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:19:58.506651   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:19:58.521523   29681 system_svc.go:56] duration metric: took 14.903721ms WaitForService to wait for kubelet.
	I1212 20:19:58.521554   29681 kubeadm.go:581] duration metric: took 10.084769887s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:19:58.521575   29681 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:19:58.700016   29681 request.go:629] Waited for 178.35373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I1212 20:19:58.700073   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I1212 20:19:58.700077   29681 round_trippers.go:469] Request Headers:
	I1212 20:19:58.700085   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:19:58.700091   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:19:58.704136   29681 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:19:58.704161   29681 round_trippers.go:577] Response Headers:
	I1212 20:19:58.704173   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:19:58.704180   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:19:58 GMT
	I1212 20:19:58.704188   29681 round_trippers.go:580]     Audit-Id: b193971d-ed46-4998-824a-f78b1e73584d
	I1212 20:19:58.704196   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:19:58.704203   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:19:58.704211   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:19:58.704448   29681 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I1212 20:19:58.704776   29681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:19:58.704799   29681 node_conditions.go:123] node cpu capacity is 2
	I1212 20:19:58.704809   29681 node_conditions.go:105] duration metric: took 183.228831ms to run NodePressure ...
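	The NodePressure step above reads the node's reported capacity; the same fields can be pulled directly (illustrative, same assumed context as the sketch above):
	  # prints the capacity map, including the ephemeral-storage and cpu values logged above
	  kubectl --context multinode-562818 get node multinode-562818 -o jsonpath='{.status.capacity}'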
	I1212 20:19:58.704819   29681 start.go:228] waiting for startup goroutines ...
	I1212 20:19:58.704825   29681 start.go:233] waiting for cluster config update ...
	I1212 20:19:58.704838   29681 start.go:242] writing updated cluster config ...
	I1212 20:19:58.707042   29681 out.go:177] 
	I1212 20:19:58.708502   29681 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:19:58.708567   29681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:19:58.710224   29681 out.go:177] * Starting worker node multinode-562818-m02 in cluster multinode-562818
	I1212 20:19:58.711389   29681 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:19:58.711413   29681 cache.go:56] Caching tarball of preloaded images
	I1212 20:19:58.711507   29681 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:19:58.711520   29681 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 20:19:58.711605   29681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:19:58.711802   29681 start.go:365] acquiring machines lock for multinode-562818-m02: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:19:58.711851   29681 start.go:369] acquired machines lock for "multinode-562818-m02" in 26.087µs
	I1212 20:19:58.711874   29681 start.go:93] Provisioning new machine with config: &{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
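	For orientation, the machine and cluster parameters in the config dump above correspond roughly to a start invocation along the following lines. This is illustrative only; the test harness drives the actual run through its own wrappers:
	  minikube start -p multinode-562818 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.28.4 \
	    --nodes=2 --memory=2200 --cpus=2 --disk-size=20000mb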
	I1212 20:19:58.711966   29681 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1212 20:19:58.713702   29681 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 20:19:58.713806   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:19:58.713831   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:19:58.727944   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1212 20:19:58.728422   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:19:58.728919   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:19:58.728933   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:19:58.729227   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:19:58.729387   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:19:58.729513   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:19:58.729655   29681 start.go:159] libmachine.API.Create for "multinode-562818" (driver="kvm2")
	I1212 20:19:58.729669   29681 client.go:168] LocalClient.Create starting
	I1212 20:19:58.729700   29681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem
	I1212 20:19:58.729736   29681 main.go:141] libmachine: Decoding PEM data...
	I1212 20:19:58.729753   29681 main.go:141] libmachine: Parsing certificate...
	I1212 20:19:58.729800   29681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem
	I1212 20:19:58.729818   29681 main.go:141] libmachine: Decoding PEM data...
	I1212 20:19:58.729831   29681 main.go:141] libmachine: Parsing certificate...
	I1212 20:19:58.729847   29681 main.go:141] libmachine: Running pre-create checks...
	I1212 20:19:58.729855   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .PreCreateCheck
	I1212 20:19:58.730014   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetConfigRaw
	I1212 20:19:58.730361   29681 main.go:141] libmachine: Creating machine...
	I1212 20:19:58.730374   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .Create
	I1212 20:19:58.730489   29681 main.go:141] libmachine: (multinode-562818-m02) Creating KVM machine...
	I1212 20:19:58.731630   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found existing default KVM network
	I1212 20:19:58.731728   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found existing private KVM network mk-multinode-562818
	I1212 20:19:58.731844   29681 main.go:141] libmachine: (multinode-562818-m02) Setting up store path in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02 ...
	I1212 20:19:58.731861   29681 main.go:141] libmachine: (multinode-562818-m02) Building disk image from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 20:19:58.731972   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:19:58.731846   30038 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:19:58.732064   29681 main.go:141] libmachine: (multinode-562818-m02) Downloading /home/jenkins/minikube-integration/17734-9188/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 20:19:58.935206   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:19:58.935064   30038 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa...
	I1212 20:19:59.008832   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:19:59.008725   30038 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/multinode-562818-m02.rawdisk...
	I1212 20:19:59.008877   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Writing magic tar header
	I1212 20:19:59.008895   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Writing SSH key tar header
	I1212 20:19:59.008909   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:19:59.008838   30038 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02 ...
	I1212 20:19:59.008954   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02
	I1212 20:19:59.008987   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines
	I1212 20:19:59.008998   29681 main.go:141] libmachine: (multinode-562818-m02) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02 (perms=drwx------)
	I1212 20:19:59.009013   29681 main.go:141] libmachine: (multinode-562818-m02) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines (perms=drwxr-xr-x)
	I1212 20:19:59.009022   29681 main.go:141] libmachine: (multinode-562818-m02) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube (perms=drwxr-xr-x)
	I1212 20:19:59.009031   29681 main.go:141] libmachine: (multinode-562818-m02) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188 (perms=drwxrwxr-x)
	I1212 20:19:59.009040   29681 main.go:141] libmachine: (multinode-562818-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 20:19:59.009050   29681 main.go:141] libmachine: (multinode-562818-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 20:19:59.009066   29681 main.go:141] libmachine: (multinode-562818-m02) Creating domain...
	I1212 20:19:59.009080   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:19:59.009097   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188
	I1212 20:19:59.009105   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 20:19:59.009112   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home/jenkins
	I1212 20:19:59.009120   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Checking permissions on dir: /home
	I1212 20:19:59.009128   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Skipping /home - not owner
	I1212 20:19:59.009998   29681 main.go:141] libmachine: (multinode-562818-m02) define libvirt domain using xml: 
	I1212 20:19:59.010020   29681 main.go:141] libmachine: (multinode-562818-m02) <domain type='kvm'>
	I1212 20:19:59.010032   29681 main.go:141] libmachine: (multinode-562818-m02)   <name>multinode-562818-m02</name>
	I1212 20:19:59.010048   29681 main.go:141] libmachine: (multinode-562818-m02)   <memory unit='MiB'>2200</memory>
	I1212 20:19:59.010062   29681 main.go:141] libmachine: (multinode-562818-m02)   <vcpu>2</vcpu>
	I1212 20:19:59.010078   29681 main.go:141] libmachine: (multinode-562818-m02)   <features>
	I1212 20:19:59.010093   29681 main.go:141] libmachine: (multinode-562818-m02)     <acpi/>
	I1212 20:19:59.010106   29681 main.go:141] libmachine: (multinode-562818-m02)     <apic/>
	I1212 20:19:59.010120   29681 main.go:141] libmachine: (multinode-562818-m02)     <pae/>
	I1212 20:19:59.010132   29681 main.go:141] libmachine: (multinode-562818-m02)     
	I1212 20:19:59.010145   29681 main.go:141] libmachine: (multinode-562818-m02)   </features>
	I1212 20:19:59.010157   29681 main.go:141] libmachine: (multinode-562818-m02)   <cpu mode='host-passthrough'>
	I1212 20:19:59.010188   29681 main.go:141] libmachine: (multinode-562818-m02)   
	I1212 20:19:59.010211   29681 main.go:141] libmachine: (multinode-562818-m02)   </cpu>
	I1212 20:19:59.010234   29681 main.go:141] libmachine: (multinode-562818-m02)   <os>
	I1212 20:19:59.010243   29681 main.go:141] libmachine: (multinode-562818-m02)     <type>hvm</type>
	I1212 20:19:59.010250   29681 main.go:141] libmachine: (multinode-562818-m02)     <boot dev='cdrom'/>
	I1212 20:19:59.010262   29681 main.go:141] libmachine: (multinode-562818-m02)     <boot dev='hd'/>
	I1212 20:19:59.010272   29681 main.go:141] libmachine: (multinode-562818-m02)     <bootmenu enable='no'/>
	I1212 20:19:59.010280   29681 main.go:141] libmachine: (multinode-562818-m02)   </os>
	I1212 20:19:59.010289   29681 main.go:141] libmachine: (multinode-562818-m02)   <devices>
	I1212 20:19:59.010298   29681 main.go:141] libmachine: (multinode-562818-m02)     <disk type='file' device='cdrom'>
	I1212 20:19:59.010311   29681 main.go:141] libmachine: (multinode-562818-m02)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/boot2docker.iso'/>
	I1212 20:19:59.010319   29681 main.go:141] libmachine: (multinode-562818-m02)       <target dev='hdc' bus='scsi'/>
	I1212 20:19:59.010329   29681 main.go:141] libmachine: (multinode-562818-m02)       <readonly/>
	I1212 20:19:59.010337   29681 main.go:141] libmachine: (multinode-562818-m02)     </disk>
	I1212 20:19:59.010366   29681 main.go:141] libmachine: (multinode-562818-m02)     <disk type='file' device='disk'>
	I1212 20:19:59.010387   29681 main.go:141] libmachine: (multinode-562818-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 20:19:59.010398   29681 main.go:141] libmachine: (multinode-562818-m02)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/multinode-562818-m02.rawdisk'/>
	I1212 20:19:59.010406   29681 main.go:141] libmachine: (multinode-562818-m02)       <target dev='hda' bus='virtio'/>
	I1212 20:19:59.010416   29681 main.go:141] libmachine: (multinode-562818-m02)     </disk>
	I1212 20:19:59.010429   29681 main.go:141] libmachine: (multinode-562818-m02)     <interface type='network'>
	I1212 20:19:59.010443   29681 main.go:141] libmachine: (multinode-562818-m02)       <source network='mk-multinode-562818'/>
	I1212 20:19:59.010457   29681 main.go:141] libmachine: (multinode-562818-m02)       <model type='virtio'/>
	I1212 20:19:59.010465   29681 main.go:141] libmachine: (multinode-562818-m02)     </interface>
	I1212 20:19:59.010472   29681 main.go:141] libmachine: (multinode-562818-m02)     <interface type='network'>
	I1212 20:19:59.010480   29681 main.go:141] libmachine: (multinode-562818-m02)       <source network='default'/>
	I1212 20:19:59.010489   29681 main.go:141] libmachine: (multinode-562818-m02)       <model type='virtio'/>
	I1212 20:19:59.010494   29681 main.go:141] libmachine: (multinode-562818-m02)     </interface>
	I1212 20:19:59.010503   29681 main.go:141] libmachine: (multinode-562818-m02)     <serial type='pty'>
	I1212 20:19:59.010516   29681 main.go:141] libmachine: (multinode-562818-m02)       <target port='0'/>
	I1212 20:19:59.010529   29681 main.go:141] libmachine: (multinode-562818-m02)     </serial>
	I1212 20:19:59.010550   29681 main.go:141] libmachine: (multinode-562818-m02)     <console type='pty'>
	I1212 20:19:59.010582   29681 main.go:141] libmachine: (multinode-562818-m02)       <target type='serial' port='0'/>
	I1212 20:19:59.010598   29681 main.go:141] libmachine: (multinode-562818-m02)     </console>
	I1212 20:19:59.010613   29681 main.go:141] libmachine: (multinode-562818-m02)     <rng model='virtio'>
	I1212 20:19:59.010630   29681 main.go:141] libmachine: (multinode-562818-m02)       <backend model='random'>/dev/random</backend>
	I1212 20:19:59.010645   29681 main.go:141] libmachine: (multinode-562818-m02)     </rng>
	I1212 20:19:59.010661   29681 main.go:141] libmachine: (multinode-562818-m02)     
	I1212 20:19:59.010674   29681 main.go:141] libmachine: (multinode-562818-m02)     
	I1212 20:19:59.010690   29681 main.go:141] libmachine: (multinode-562818-m02)   </devices>
	I1212 20:19:59.010703   29681 main.go:141] libmachine: (multinode-562818-m02) </domain>
	I1212 20:19:59.010725   29681 main.go:141] libmachine: (multinode-562818-m02) 
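	Once the domain XML above has been defined, the result can be inspected from the host with virsh. A minimal sketch, assuming the qemu:///system connection named in the config:
	  # dump the defined domain and list its two interfaces (mk-multinode-562818 and default)
	  virsh -c qemu:///system dumpxml multinode-562818-m02
	  virsh -c qemu:///system domiflist multinode-562818-m02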
	I1212 20:19:59.017523   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:cf:00:5d in network default
	I1212 20:19:59.018095   29681 main.go:141] libmachine: (multinode-562818-m02) Ensuring networks are active...
	I1212 20:19:59.018127   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:19:59.018712   29681 main.go:141] libmachine: (multinode-562818-m02) Ensuring network default is active
	I1212 20:19:59.018943   29681 main.go:141] libmachine: (multinode-562818-m02) Ensuring network mk-multinode-562818 is active
	I1212 20:19:59.019279   29681 main.go:141] libmachine: (multinode-562818-m02) Getting domain xml...
	I1212 20:19:59.019963   29681 main.go:141] libmachine: (multinode-562818-m02) Creating domain...
	I1212 20:20:00.252443   29681 main.go:141] libmachine: (multinode-562818-m02) Waiting to get IP...
	I1212 20:20:00.253339   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:00.253709   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:00.253737   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:00.253678   30038 retry.go:31] will retry after 264.479006ms: waiting for machine to come up
	I1212 20:20:00.520191   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:00.520600   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:00.520625   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:00.520557   30038 retry.go:31] will retry after 278.343177ms: waiting for machine to come up
	I1212 20:20:00.800143   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:00.800499   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:00.800531   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:00.800459   30038 retry.go:31] will retry after 455.457635ms: waiting for machine to come up
	I1212 20:20:01.257757   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:01.258173   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:01.258203   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:01.258112   30038 retry.go:31] will retry after 597.764384ms: waiting for machine to come up
	I1212 20:20:01.857860   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:01.858327   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:01.858357   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:01.858279   30038 retry.go:31] will retry after 704.281835ms: waiting for machine to come up
	I1212 20:20:02.564135   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:02.564647   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:02.564677   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:02.564587   30038 retry.go:31] will retry after 664.675039ms: waiting for machine to come up
	I1212 20:20:03.230522   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:03.230929   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:03.230958   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:03.230901   30038 retry.go:31] will retry after 957.819866ms: waiting for machine to come up
	I1212 20:20:04.189943   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:04.190418   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:04.190449   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:04.190361   30038 retry.go:31] will retry after 1.213134802s: waiting for machine to come up
	I1212 20:20:05.405726   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:05.406187   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:05.406218   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:05.406123   30038 retry.go:31] will retry after 1.30389245s: waiting for machine to come up
	I1212 20:20:06.711603   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:06.712017   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:06.712046   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:06.711968   30038 retry.go:31] will retry after 1.917590116s: waiting for machine to come up
	I1212 20:20:08.632106   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:08.632521   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:08.632548   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:08.632476   30038 retry.go:31] will retry after 2.648564985s: waiting for machine to come up
	I1212 20:20:11.284498   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:11.284958   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:11.284991   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:11.284916   30038 retry.go:31] will retry after 3.371005832s: waiting for machine to come up
	I1212 20:20:14.657857   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:14.658286   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:14.658314   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:14.658240   30038 retry.go:31] will retry after 3.249390227s: waiting for machine to come up
	I1212 20:20:17.911573   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:17.912069   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find current IP address of domain multinode-562818-m02 in network mk-multinode-562818
	I1212 20:20:17.912097   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | I1212 20:20:17.912008   30038 retry.go:31] will retry after 4.446619524s: waiting for machine to come up
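	The retry lines above amount to polling libvirt's DHCP leases for the new MAC with a growing delay. A hand-rolled equivalent might look like the following (illustrative only; MAC and network name are taken from the log):
	  # poll the private network's leases until the node's MAC is assigned an address
	  delay=1
	  until virsh -c qemu:///system net-dhcp-leases mk-multinode-562818 | grep -q '52:54:00:33:1b:cb'; do
	    sleep "$delay"; delay=$((delay * 2))
	  done
	  virsh -c qemu:///system net-dhcp-leases mk-multinode-562818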
	I1212 20:20:22.363135   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.363578   29681 main.go:141] libmachine: (multinode-562818-m02) Found IP for machine: 192.168.39.65
	I1212 20:20:22.363598   29681 main.go:141] libmachine: (multinode-562818-m02) Reserving static IP address...
	I1212 20:20:22.363614   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.364090   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | unable to find host DHCP lease matching {name: "multinode-562818-m02", mac: "52:54:00:33:1b:cb", ip: "192.168.39.65"} in network mk-multinode-562818
	I1212 20:20:22.437577   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Getting to WaitForSSH function...
	I1212 20:20:22.437643   29681 main.go:141] libmachine: (multinode-562818-m02) Reserved static IP address: 192.168.39.65
	I1212 20:20:22.437663   29681 main.go:141] libmachine: (multinode-562818-m02) Waiting for SSH to be available...
	I1212 20:20:22.440558   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.440982   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:22.441029   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.441118   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Using SSH client type: external
	I1212 20:20:22.441158   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa (-rw-------)
	I1212 20:20:22.441183   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 20:20:22.441202   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | About to run SSH command:
	I1212 20:20:22.441219   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | exit 0
	I1212 20:20:22.535419   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 20:20:22.535665   29681 main.go:141] libmachine: (multinode-562818-m02) KVM machine creation complete!
	I1212 20:20:22.535978   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetConfigRaw
	I1212 20:20:22.536511   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:22.536696   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:22.536861   29681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 20:20:22.536882   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetState
	I1212 20:20:22.538219   29681 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 20:20:22.538237   29681 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 20:20:22.538244   29681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 20:20:22.538257   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:22.540631   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.541020   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:22.541050   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.541252   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:22.541436   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.541602   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.541762   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:22.541973   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:20:22.542418   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:20:22.542432   29681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 20:20:22.662684   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:20:22.662707   29681 main.go:141] libmachine: Detecting the provisioner...
	I1212 20:20:22.662714   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:22.665661   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.665983   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:22.666018   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.666205   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:22.666381   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.666509   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.666633   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:22.666768   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:20:22.667079   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:20:22.667091   29681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 20:20:22.791983   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 20:20:22.792048   29681 main.go:141] libmachine: found compatible host: buildroot
	I1212 20:20:22.792063   29681 main.go:141] libmachine: Provisioning with buildroot...
	I1212 20:20:22.792079   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:20:22.792340   29681 buildroot.go:166] provisioning hostname "multinode-562818-m02"
	I1212 20:20:22.792375   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:20:22.792557   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:22.795421   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.795797   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:22.795824   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.795951   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:22.796140   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.796318   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.796461   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:22.796599   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:20:22.796924   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:20:22.796940   29681 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-562818-m02 && echo "multinode-562818-m02" | sudo tee /etc/hostname
	I1212 20:20:22.931785   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-562818-m02
	
	I1212 20:20:22.931814   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:22.934807   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.935195   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:22.935253   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:22.935430   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:22.935613   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.935774   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:22.935893   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:22.936078   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:20:22.936391   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:20:22.936408   29681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-562818-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-562818-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-562818-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:20:23.067761   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:20:23.067801   29681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:20:23.067823   29681 buildroot.go:174] setting up certificates
	I1212 20:20:23.067834   29681 provision.go:83] configureAuth start
	I1212 20:20:23.067847   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:20:23.068119   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:20:23.070711   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.071123   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.071153   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.071310   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:23.073334   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.073736   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.073765   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.073882   29681 provision.go:138] copyHostCerts
	I1212 20:20:23.073921   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:20:23.073954   29681 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:20:23.073963   29681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:20:23.074023   29681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:20:23.074090   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:20:23.074106   29681 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:20:23.074114   29681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:20:23.074137   29681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:20:23.074177   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:20:23.074192   29681 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:20:23.074198   29681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:20:23.074217   29681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:20:23.074257   29681 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.multinode-562818-m02 san=[192.168.39.65 192.168.39.65 localhost 127.0.0.1 minikube multinode-562818-m02]
	I1212 20:20:23.160045   29681 provision.go:172] copyRemoteCerts
	I1212 20:20:23.160102   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:20:23.160123   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:23.163715   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.164153   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.164184   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.164368   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:23.164514   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.164692   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:23.164793   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:20:23.259000   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:20:23.259066   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:20:23.286995   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:20:23.287061   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 20:20:23.313383   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:20:23.313449   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
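	With the server certificate generated and copied above, the SAN list from the provision step can be double-checked locally. A small sketch using the path shown in the log:
	  # print the SANs embedded in the generated server certificate
	  openssl x509 -in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem \
	    -noout -text | grep -A1 'Subject Alternative Name'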
	I1212 20:20:23.340422   29681 provision.go:86] duration metric: configureAuth took 272.577693ms
	I1212 20:20:23.340449   29681 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:20:23.340644   29681 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:20:23.340726   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:23.343254   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.343537   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.343565   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.343799   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:23.343978   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.344102   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.344210   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:23.344379   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:20:23.344674   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:20:23.344689   29681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:20:23.669456   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:20:23.669488   29681 main.go:141] libmachine: Checking connection to Docker...
	I1212 20:20:23.669496   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetURL
	I1212 20:20:23.670756   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | Using libvirt version 6000000
	I1212 20:20:23.673139   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.673493   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.673530   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.673706   29681 main.go:141] libmachine: Docker is up and running!
	I1212 20:20:23.673724   29681 main.go:141] libmachine: Reticulating splines...
	I1212 20:20:23.673732   29681 client.go:171] LocalClient.Create took 24.944056096s
	I1212 20:20:23.673753   29681 start.go:167] duration metric: libmachine.API.Create for "multinode-562818" took 24.944098788s
	I1212 20:20:23.673763   29681 start.go:300] post-start starting for "multinode-562818-m02" (driver="kvm2")
	I1212 20:20:23.673771   29681 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:20:23.673787   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:23.674042   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:20:23.674068   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:23.676398   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.676718   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.676751   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.676902   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:23.677092   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.677218   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:23.677359   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:20:23.769066   29681 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:20:23.773174   29681 command_runner.go:130] > NAME=Buildroot
	I1212 20:20:23.773200   29681 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 20:20:23.773206   29681 command_runner.go:130] > ID=buildroot
	I1212 20:20:23.773213   29681 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 20:20:23.773220   29681 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 20:20:23.773257   29681 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:20:23.773274   29681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:20:23.773341   29681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:20:23.773413   29681 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:20:23.773423   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /etc/ssl/certs/164562.pem
	I1212 20:20:23.773498   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:20:23.782335   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:20:23.805410   29681 start.go:303] post-start completed in 131.63402ms
	I1212 20:20:23.805464   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetConfigRaw
	I1212 20:20:23.806032   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:20:23.808620   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.808944   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.808972   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.809209   29681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:20:23.809386   29681 start.go:128] duration metric: createHost completed in 25.097406515s
	I1212 20:20:23.809407   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:23.811880   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.812251   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.812284   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.812376   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:23.812559   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.812741   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.812894   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:23.813043   29681 main.go:141] libmachine: Using SSH client type: native
	I1212 20:20:23.813381   29681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:20:23.813392   29681 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:20:23.936125   29681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412423.918876590
	
	I1212 20:20:23.936150   29681 fix.go:206] guest clock: 1702412423.918876590
	I1212 20:20:23.936159   29681 fix.go:219] Guest: 2023-12-12 20:20:23.91887659 +0000 UTC Remote: 2023-12-12 20:20:23.809396998 +0000 UTC m=+92.968633176 (delta=109.479592ms)
	I1212 20:20:23.936178   29681 fix.go:190] guest clock delta is within tolerance: 109.479592ms
	I1212 20:20:23.936183   29681 start.go:83] releasing machines lock for "multinode-562818-m02", held for 25.224320104s
	I1212 20:20:23.936199   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:23.936484   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:20:23.939476   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.939850   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.939881   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.942262   29681 out.go:177] * Found network options:
	I1212 20:20:23.943629   29681 out.go:177]   - NO_PROXY=192.168.39.77
	W1212 20:20:23.944924   29681 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 20:20:23.944958   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:23.945531   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:23.945729   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:20:23.945821   29681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:20:23.945862   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	W1212 20:20:23.945952   29681 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 20:20:23.946053   29681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:20:23.946078   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:20:23.948759   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.948790   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.949105   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.949135   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.949164   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:23.949194   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:23.949244   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:23.949487   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.949488   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:20:23.949689   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:20:23.949698   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:23.949864   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:20:23.949868   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:20:23.950008   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:20:24.193225   29681 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:20:24.193324   29681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:20:24.199179   29681 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:20:24.199389   29681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:20:24.199510   29681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:20:24.214019   29681 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 20:20:24.214194   29681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:20:24.214212   29681 start.go:475] detecting cgroup driver to use...
	I1212 20:20:24.214279   29681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:20:24.230648   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:20:24.245668   29681 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:20:24.245729   29681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:20:24.260462   29681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:20:24.275361   29681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:20:24.289931   29681 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 20:20:24.394429   29681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:20:24.407858   29681 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 20:20:24.527745   29681 docker.go:219] disabling docker service ...
	I1212 20:20:24.527820   29681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:20:24.543836   29681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:20:24.555911   29681 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 20:20:24.556292   29681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:20:24.570454   29681 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 20:20:24.670615   29681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:20:24.784293   29681 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 20:20:24.784321   29681 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 20:20:24.784380   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:20:24.797447   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:20:24.814808   29681 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:20:24.815125   29681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 20:20:24.815200   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:24.824725   29681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:20:24.824784   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:24.834642   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:24.845297   29681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:24.855303   29681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:20:24.866114   29681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:20:24.875233   29681 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:20:24.875312   29681 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:20:24.875366   29681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 20:20:24.889215   29681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:20:24.898752   29681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:20:25.025971   29681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:20:25.208027   29681 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:20:25.208106   29681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:20:25.213793   29681 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:20:25.213819   29681 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:20:25.213830   29681 command_runner.go:130] > Device: 16h/22d	Inode: 789         Links: 1
	I1212 20:20:25.213841   29681 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:20:25.213848   29681 command_runner.go:130] > Access: 2023-12-12 20:20:25.176155400 +0000
	I1212 20:20:25.213857   29681 command_runner.go:130] > Modify: 2023-12-12 20:20:25.176155400 +0000
	I1212 20:20:25.213866   29681 command_runner.go:130] > Change: 2023-12-12 20:20:25.176155400 +0000
	I1212 20:20:25.213876   29681 command_runner.go:130] >  Birth: -
	I1212 20:20:25.213901   29681 start.go:543] Will wait 60s for crictl version
	I1212 20:20:25.213955   29681 ssh_runner.go:195] Run: which crictl
	I1212 20:20:25.217762   29681 command_runner.go:130] > /usr/bin/crictl
	I1212 20:20:25.217811   29681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:20:25.258215   29681 command_runner.go:130] > Version:  0.1.0
	I1212 20:20:25.258235   29681 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:20:25.258240   29681 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 20:20:25.258245   29681 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:20:25.258259   29681 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 20:20:25.258310   29681 ssh_runner.go:195] Run: crio --version
	I1212 20:20:25.313446   29681 command_runner.go:130] > crio version 1.24.1
	I1212 20:20:25.313483   29681 command_runner.go:130] > Version:          1.24.1
	I1212 20:20:25.313495   29681 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:20:25.313504   29681 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:20:25.313513   29681 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:20:25.313527   29681 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:20:25.313535   29681 command_runner.go:130] > Compiler:         gc
	I1212 20:20:25.313544   29681 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:20:25.313557   29681 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:20:25.313573   29681 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:20:25.313585   29681 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:20:25.313597   29681 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:20:25.313692   29681 ssh_runner.go:195] Run: crio --version
	I1212 20:20:25.365240   29681 command_runner.go:130] > crio version 1.24.1
	I1212 20:20:25.365267   29681 command_runner.go:130] > Version:          1.24.1
	I1212 20:20:25.365279   29681 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:20:25.365285   29681 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:20:25.365293   29681 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:20:25.365301   29681 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:20:25.365307   29681 command_runner.go:130] > Compiler:         gc
	I1212 20:20:25.365314   29681 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:20:25.365323   29681 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:20:25.365336   29681 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:20:25.365348   29681 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:20:25.365358   29681 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:20:25.367560   29681 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 20:20:25.369199   29681 out.go:177]   - env NO_PROXY=192.168.39.77
	I1212 20:20:25.370597   29681 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:20:25.373443   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:25.373745   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:20:25.373769   29681 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:20:25.373967   29681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:20:25.378256   29681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:20:25.390251   29681 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818 for IP: 192.168.39.65
	I1212 20:20:25.390277   29681 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:25.390414   29681 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:20:25.390453   29681 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:20:25.390466   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:20:25.390480   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:20:25.390497   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:20:25.390516   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:20:25.390585   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:20:25.390630   29681 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:20:25.390645   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:20:25.390672   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:20:25.390695   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:20:25.390716   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:20:25.390765   29681 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:20:25.390790   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /usr/share/ca-certificates/164562.pem
	I1212 20:20:25.390803   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:25.390815   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem -> /usr/share/ca-certificates/16456.pem
	I1212 20:20:25.391177   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:20:25.415437   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:20:25.439110   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:20:25.463260   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:20:25.487410   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:20:25.511279   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:20:25.535618   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:20:25.557857   29681 ssh_runner.go:195] Run: openssl version
	I1212 20:20:25.563154   29681 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 20:20:25.563228   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:20:25.573610   29681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:25.577901   29681 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:25.577974   29681 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:25.578018   29681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:25.583232   29681 command_runner.go:130] > b5213941
	I1212 20:20:25.583311   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 20:20:25.593553   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:20:25.604084   29681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:20:25.608637   29681 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:20:25.608726   29681 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:20:25.608791   29681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:20:25.614259   29681 command_runner.go:130] > 51391683
	I1212 20:20:25.614591   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:20:25.625588   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:20:25.635932   29681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:20:25.640662   29681 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:20:25.640698   29681 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:20:25.640745   29681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:20:25.646287   29681 command_runner.go:130] > 3ec20f2e
	I1212 20:20:25.646640   29681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 20:20:25.657973   29681 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:20:25.662038   29681 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:20:25.662204   29681 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:20:25.662326   29681 ssh_runner.go:195] Run: crio config
	I1212 20:20:25.719645   29681 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:20:25.719678   29681 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:20:25.719689   29681 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:20:25.719695   29681 command_runner.go:130] > #
	I1212 20:20:25.719706   29681 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:20:25.719716   29681 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:20:25.719726   29681 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:20:25.719737   29681 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:20:25.719750   29681 command_runner.go:130] > # reload'.
	I1212 20:20:25.719759   29681 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:20:25.719770   29681 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:20:25.719783   29681 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:20:25.719793   29681 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:20:25.719803   29681 command_runner.go:130] > [crio]
	I1212 20:20:25.719812   29681 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:20:25.719829   29681 command_runner.go:130] > # containers images, in this directory.
	I1212 20:20:25.719856   29681 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 20:20:25.719873   29681 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:20:25.720190   29681 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 20:20:25.720212   29681 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:20:25.720222   29681 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:20:25.720272   29681 command_runner.go:130] > storage_driver = "overlay"
	I1212 20:20:25.720288   29681 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:20:25.720301   29681 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:20:25.720311   29681 command_runner.go:130] > storage_option = [
	I1212 20:20:25.720564   29681 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 20:20:25.720599   29681 command_runner.go:130] > ]
	I1212 20:20:25.720614   29681 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:20:25.720626   29681 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:20:25.720992   29681 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:20:25.721009   29681 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:20:25.721020   29681 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:20:25.721028   29681 command_runner.go:130] > # always happen on a node reboot
	I1212 20:20:25.721568   29681 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:20:25.721588   29681 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:20:25.721599   29681 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:20:25.721613   29681 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:20:25.722020   29681 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 20:20:25.722042   29681 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:20:25.722056   29681 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:20:25.722622   29681 command_runner.go:130] > # internal_wipe = true
	I1212 20:20:25.722640   29681 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:20:25.722650   29681 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:20:25.722660   29681 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:20:25.723171   29681 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:20:25.723192   29681 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:20:25.723199   29681 command_runner.go:130] > [crio.api]
	I1212 20:20:25.723207   29681 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:20:25.723698   29681 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:20:25.723714   29681 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:20:25.724250   29681 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:20:25.724269   29681 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:20:25.724281   29681 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:20:25.724683   29681 command_runner.go:130] > # stream_port = "0"
	I1212 20:20:25.724697   29681 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:20:25.726478   29681 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:20:25.726496   29681 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:20:25.726504   29681 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:20:25.726515   29681 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:20:25.726530   29681 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 20:20:25.726540   29681 command_runner.go:130] > # minutes.
	I1212 20:20:25.726551   29681 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:20:25.726564   29681 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:20:25.726579   29681 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 20:20:25.726589   29681 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:20:25.726601   29681 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:20:25.726616   29681 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:20:25.726628   29681 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 20:20:25.726636   29681 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:20:25.726652   29681 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:20:25.726664   29681 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 20:20:25.726680   29681 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:20:25.726691   29681 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 20:20:25.726714   29681 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:20:25.726726   29681 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:20:25.726734   29681 command_runner.go:130] > [crio.runtime]
	I1212 20:20:25.726747   29681 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:20:25.726761   29681 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:20:25.726771   29681 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:20:25.726790   29681 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:20:25.726801   29681 command_runner.go:130] > # default_ulimits = [
	I1212 20:20:25.726811   29681 command_runner.go:130] > # ]
	I1212 20:20:25.726824   29681 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:20:25.726834   29681 command_runner.go:130] > # no_pivot = false
	I1212 20:20:25.726845   29681 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:20:25.726859   29681 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:20:25.726871   29681 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:20:25.726884   29681 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:20:25.726896   29681 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:20:25.726911   29681 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:20:25.726923   29681 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 20:20:25.726934   29681 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:20:25.726947   29681 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:20:25.726958   29681 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:20:25.726972   29681 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:20:25.726985   29681 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:20:25.727000   29681 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:20:25.727010   29681 command_runner.go:130] > conmon_env = [
	I1212 20:20:25.727023   29681 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 20:20:25.727032   29681 command_runner.go:130] > ]
	I1212 20:20:25.727042   29681 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:20:25.727055   29681 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:20:25.727068   29681 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:20:25.727078   29681 command_runner.go:130] > # default_env = [
	I1212 20:20:25.727088   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727098   29681 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:20:25.727108   29681 command_runner.go:130] > # selinux = false
	I1212 20:20:25.727121   29681 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:20:25.727135   29681 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 20:20:25.727148   29681 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 20:20:25.727159   29681 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:20:25.727172   29681 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 20:20:25.727186   29681 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 20:20:25.727217   29681 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 20:20:25.727228   29681 command_runner.go:130] > # which might increase security.
	I1212 20:20:25.727246   29681 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 20:20:25.727258   29681 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:20:25.727272   29681 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:20:25.727286   29681 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:20:25.727301   29681 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:20:25.727313   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:20:25.727325   29681 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:20:25.727338   29681 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:20:25.727348   29681 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:20:25.727357   29681 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:20:25.727371   29681 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:20:25.727382   29681 command_runner.go:130] > # irqbalance daemon.
	I1212 20:20:25.727396   29681 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:20:25.727411   29681 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:20:25.727423   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:20:25.727434   29681 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:20:25.727446   29681 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:20:25.727454   29681 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:20:25.727469   29681 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:20:25.727479   29681 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:20:25.727494   29681 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:20:25.727508   29681 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:20:25.727518   29681 command_runner.go:130] > # will be added.
	I1212 20:20:25.727527   29681 command_runner.go:130] > # default_capabilities = [
	I1212 20:20:25.727536   29681 command_runner.go:130] > # 	"CHOWN",
	I1212 20:20:25.727544   29681 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:20:25.727554   29681 command_runner.go:130] > # 	"FSETID",
	I1212 20:20:25.727562   29681 command_runner.go:130] > # 	"FOWNER",
	I1212 20:20:25.727572   29681 command_runner.go:130] > # 	"SETGID",
	I1212 20:20:25.727582   29681 command_runner.go:130] > # 	"SETUID",
	I1212 20:20:25.727592   29681 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:20:25.727602   29681 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:20:25.727610   29681 command_runner.go:130] > # 	"KILL",
	I1212 20:20:25.727617   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727629   29681 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:20:25.727642   29681 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:20:25.727653   29681 command_runner.go:130] > # default_sysctls = [
	I1212 20:20:25.727661   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727670   29681 command_runner.go:130] > # List of devices on the host that a
	I1212 20:20:25.727684   29681 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:20:25.727695   29681 command_runner.go:130] > # allowed_devices = [
	I1212 20:20:25.727703   29681 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:20:25.727709   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727721   29681 command_runner.go:130] > # List of additional devices, specified as
	I1212 20:20:25.727735   29681 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:20:25.727747   29681 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:20:25.727775   29681 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:20:25.727790   29681 command_runner.go:130] > # additional_devices = [
	I1212 20:20:25.727799   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727810   29681 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:20:25.727820   29681 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:20:25.727831   29681 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:20:25.727841   29681 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:20:25.727848   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727862   29681 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:20:25.727876   29681 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:20:25.727885   29681 command_runner.go:130] > # Defaults to false.
	I1212 20:20:25.727895   29681 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:20:25.727909   29681 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:20:25.727923   29681 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:20:25.727933   29681 command_runner.go:130] > # hooks_dir = [
	I1212 20:20:25.727945   29681 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:20:25.727952   29681 command_runner.go:130] > # ]
	I1212 20:20:25.727964   29681 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:20:25.727978   29681 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:20:25.727993   29681 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:20:25.728002   29681 command_runner.go:130] > #
	I1212 20:20:25.728014   29681 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:20:25.728028   29681 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:20:25.728041   29681 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:20:25.728049   29681 command_runner.go:130] > #
	I1212 20:20:25.728060   29681 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:20:25.728075   29681 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:20:25.728090   29681 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:20:25.728102   29681 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:20:25.728111   29681 command_runner.go:130] > #
	I1212 20:20:25.728120   29681 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:20:25.728132   29681 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:20:25.728147   29681 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:20:25.728157   29681 command_runner.go:130] > pids_limit = 1024
	I1212 20:20:25.728171   29681 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 20:20:25.728184   29681 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:20:25.728197   29681 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:20:25.728215   29681 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:20:25.728226   29681 command_runner.go:130] > # log_size_max = -1
	I1212 20:20:25.728241   29681 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:20:25.728252   29681 command_runner.go:130] > # log_to_journald = false
	I1212 20:20:25.728268   29681 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:20:25.728279   29681 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:20:25.728288   29681 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:20:25.728298   29681 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:20:25.728312   29681 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:20:25.728323   29681 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:20:25.728337   29681 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:20:25.728347   29681 command_runner.go:130] > # read_only = false
	I1212 20:20:25.728361   29681 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:20:25.728375   29681 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:20:25.728385   29681 command_runner.go:130] > # live configuration reload.
	I1212 20:20:25.728393   29681 command_runner.go:130] > # log_level = "info"
	I1212 20:20:25.728407   29681 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:20:25.728419   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:20:25.728429   29681 command_runner.go:130] > # log_filter = ""
	I1212 20:20:25.728443   29681 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:20:25.728457   29681 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:20:25.728468   29681 command_runner.go:130] > # separated by comma.
	I1212 20:20:25.728477   29681 command_runner.go:130] > # uid_mappings = ""
	I1212 20:20:25.728489   29681 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:20:25.728502   29681 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:20:25.728513   29681 command_runner.go:130] > # separated by comma.
	I1212 20:20:25.728521   29681 command_runner.go:130] > # gid_mappings = ""
	I1212 20:20:25.728535   29681 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:20:25.728549   29681 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:20:25.728563   29681 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:20:25.728573   29681 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:20:25.728584   29681 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:20:25.728598   29681 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:20:25.728612   29681 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:20:25.728622   29681 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:20:25.728634   29681 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:20:25.728647   29681 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:20:25.728662   29681 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:20:25.728672   29681 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:20:25.728686   29681 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:20:25.728699   29681 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:20:25.728711   29681 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:20:25.728723   29681 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:20:25.728733   29681 command_runner.go:130] > drop_infra_ctr = false
	I1212 20:20:25.728745   29681 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:20:25.728758   29681 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:20:25.728773   29681 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:20:25.728789   29681 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:20:25.728803   29681 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:20:25.728815   29681 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:20:25.728825   29681 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:20:25.728838   29681 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:20:25.728849   29681 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 20:20:25.728863   29681 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:20:25.728878   29681 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 20:20:25.728892   29681 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 20:20:25.728903   29681 command_runner.go:130] > # default_runtime = "runc"
	I1212 20:20:25.728913   29681 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:20:25.728928   29681 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:20:25.728946   29681 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:20:25.728958   29681 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:20:25.728977   29681 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:20:25.728989   29681 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:20:25.729001   29681 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:20:25.729010   29681 command_runner.go:130] > # ]
	I1212 20:20:25.729021   29681 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:20:25.729035   29681 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:20:25.729050   29681 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 20:20:25.729064   29681 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 20:20:25.729073   29681 command_runner.go:130] > #
	I1212 20:20:25.729082   29681 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 20:20:25.729094   29681 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 20:20:25.729104   29681 command_runner.go:130] > #  runtime_type = "oci"
	I1212 20:20:25.729113   29681 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 20:20:25.729125   29681 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 20:20:25.729136   29681 command_runner.go:130] > #  allowed_annotations = []
	I1212 20:20:25.729145   29681 command_runner.go:130] > # Where:
	I1212 20:20:25.729155   29681 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 20:20:25.729170   29681 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 20:20:25.729186   29681 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:20:25.729200   29681 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:20:25.729209   29681 command_runner.go:130] > #   in $PATH.
	I1212 20:20:25.729222   29681 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 20:20:25.729233   29681 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:20:25.729247   29681 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 20:20:25.729257   29681 command_runner.go:130] > #   state.
	I1212 20:20:25.729270   29681 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:20:25.729282   29681 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 20:20:25.729294   29681 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:20:25.729307   29681 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:20:25.729320   29681 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:20:25.729335   29681 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:20:25.729346   29681 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:20:25.729361   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:20:25.729376   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:20:25.729390   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:20:25.729404   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:20:25.729420   29681 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:20:25.729434   29681 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:20:25.729448   29681 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:20:25.729462   29681 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 20:20:25.729474   29681 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:20:25.729486   29681 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:20:25.729497   29681 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 20:20:25.729508   29681 command_runner.go:130] > runtime_type = "oci"
	I1212 20:20:25.729517   29681 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:20:25.729530   29681 command_runner.go:130] > runtime_config_path = ""
	I1212 20:20:25.729540   29681 command_runner.go:130] > monitor_path = ""
	I1212 20:20:25.729551   29681 command_runner.go:130] > monitor_cgroup = ""
	I1212 20:20:25.729560   29681 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:20:25.729574   29681 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 20:20:25.729585   29681 command_runner.go:130] > # running containers
	I1212 20:20:25.729594   29681 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 20:20:25.729608   29681 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 20:20:25.729639   29681 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 20:20:25.729652   29681 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 20:20:25.729662   29681 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 20:20:25.729672   29681 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 20:20:25.729680   29681 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 20:20:25.729689   29681 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 20:20:25.729701   29681 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 20:20:25.729709   29681 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 20:20:25.729721   29681 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:20:25.729733   29681 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:20:25.729747   29681 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:20:25.729763   29681 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 20:20:25.729784   29681 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 20:20:25.729797   29681 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:20:25.729816   29681 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:20:25.729832   29681 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:20:25.729846   29681 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:20:25.729861   29681 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:20:25.729871   29681 command_runner.go:130] > # Example:
	I1212 20:20:25.729883   29681 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:20:25.729896   29681 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:20:25.729907   29681 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:20:25.729919   29681 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:20:25.729927   29681 command_runner.go:130] > # cpuset = 0
	I1212 20:20:25.729937   29681 command_runner.go:130] > # cpushares = "0-1"
	I1212 20:20:25.729946   29681 command_runner.go:130] > # Where:
	I1212 20:20:25.729956   29681 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:20:25.729971   29681 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:20:25.729984   29681 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:20:25.729997   29681 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:20:25.730014   29681 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:20:25.730028   29681 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:20:25.730038   29681 command_runner.go:130] > # 
	I1212 20:20:25.730053   29681 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:20:25.730062   29681 command_runner.go:130] > #
	I1212 20:20:25.730073   29681 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:20:25.730087   29681 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 20:20:25.730101   29681 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 20:20:25.730113   29681 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 20:20:25.730122   29681 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 20:20:25.730129   29681 command_runner.go:130] > [crio.image]
	I1212 20:20:25.730145   29681 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:20:25.730157   29681 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:20:25.730168   29681 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:20:25.730183   29681 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:20:25.730193   29681 command_runner.go:130] > # global_auth_file = ""
	I1212 20:20:25.730204   29681 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:20:25.730216   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:20:25.730227   29681 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 20:20:25.730242   29681 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:20:25.730256   29681 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:20:25.730265   29681 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:20:25.730276   29681 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:20:25.730287   29681 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:20:25.730301   29681 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 20:20:25.730315   29681 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 20:20:25.730329   29681 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:20:25.730340   29681 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:20:25.730351   29681 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:20:25.730365   29681 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:20:25.730380   29681 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:20:25.730394   29681 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:20:25.730406   29681 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:20:25.730416   29681 command_runner.go:130] > # signature_policy = ""
	I1212 20:20:25.730428   29681 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:20:25.730440   29681 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:20:25.730451   29681 command_runner.go:130] > # changing them here.
	I1212 20:20:25.730463   29681 command_runner.go:130] > # insecure_registries = [
	I1212 20:20:25.730472   29681 command_runner.go:130] > # ]
	I1212 20:20:25.730484   29681 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:20:25.730496   29681 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:20:25.730506   29681 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:20:25.730518   29681 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:20:25.730526   29681 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:20:25.730540   29681 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:20:25.730550   29681 command_runner.go:130] > # CNI plugins.
	I1212 20:20:25.730560   29681 command_runner.go:130] > [crio.network]
	I1212 20:20:25.730572   29681 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:20:25.730586   29681 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:20:25.730596   29681 command_runner.go:130] > # cni_default_network = ""
	I1212 20:20:25.730608   29681 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:20:25.730619   29681 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:20:25.730629   29681 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:20:25.730637   29681 command_runner.go:130] > # plugin_dirs = [
	I1212 20:20:25.730647   29681 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:20:25.730656   29681 command_runner.go:130] > # ]
	I1212 20:20:25.730667   29681 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:20:25.730678   29681 command_runner.go:130] > [crio.metrics]
	I1212 20:20:25.730690   29681 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:20:25.730700   29681 command_runner.go:130] > enable_metrics = true
	I1212 20:20:25.730712   29681 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:20:25.730721   29681 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:20:25.730736   29681 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:20:25.730750   29681 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:20:25.730764   29681 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:20:25.730774   29681 command_runner.go:130] > # metrics_collectors = [
	I1212 20:20:25.730788   29681 command_runner.go:130] > # 	"operations",
	I1212 20:20:25.730797   29681 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 20:20:25.730806   29681 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 20:20:25.730817   29681 command_runner.go:130] > # 	"operations_errors",
	I1212 20:20:25.730826   29681 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 20:20:25.730837   29681 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 20:20:25.730846   29681 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 20:20:25.730858   29681 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 20:20:25.730869   29681 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 20:20:25.730879   29681 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:20:25.730886   29681 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 20:20:25.730897   29681 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:20:25.730905   29681 command_runner.go:130] > # 	"containers_oom",
	I1212 20:20:25.730916   29681 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:20:25.730924   29681 command_runner.go:130] > # 	"operations_total",
	I1212 20:20:25.730935   29681 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:20:25.730944   29681 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:20:25.730955   29681 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:20:25.730963   29681 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:20:25.730972   29681 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:20:25.730984   29681 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:20:25.730992   29681 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:20:25.731003   29681 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:20:25.731013   29681 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:20:25.731022   29681 command_runner.go:130] > # ]
	I1212 20:20:25.731032   29681 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:20:25.731041   29681 command_runner.go:130] > # metrics_port = 9090
	I1212 20:20:25.731051   29681 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:20:25.731063   29681 command_runner.go:130] > # metrics_socket = ""
	I1212 20:20:25.731081   29681 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:20:25.731094   29681 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:20:25.731109   29681 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:20:25.731121   29681 command_runner.go:130] > # certificate on any modification event.
	I1212 20:20:25.731130   29681 command_runner.go:130] > # metrics_cert = ""
	I1212 20:20:25.731140   29681 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:20:25.731152   29681 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:20:25.731162   29681 command_runner.go:130] > # metrics_key = ""
	I1212 20:20:25.731176   29681 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:20:25.731186   29681 command_runner.go:130] > [crio.tracing]
	I1212 20:20:25.731197   29681 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:20:25.731207   29681 command_runner.go:130] > # enable_tracing = false
	I1212 20:20:25.731220   29681 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:20:25.731229   29681 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 20:20:25.731256   29681 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 20:20:25.731268   29681 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:20:25.731280   29681 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:20:25.731290   29681 command_runner.go:130] > [crio.stats]
	I1212 20:20:25.731301   29681 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:20:25.731314   29681 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:20:25.731322   29681 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:20:25.731361   29681 command_runner.go:130] ! time="2023-12-12 20:20:25.703518703Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 20:20:25.731382   29681 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:20:25.731450   29681 cni.go:84] Creating CNI manager for ""
	I1212 20:20:25.731461   29681 cni.go:136] 2 nodes found, recommending kindnet
	I1212 20:20:25.731473   29681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:20:25.731499   29681 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-562818 NodeName:multinode-562818-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:20:25.731635   29681 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-562818-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:20:25.731699   29681 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-562818-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 20:20:25.731763   29681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 20:20:25.742003   29681 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1212 20:20:25.742043   29681 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1212 20:20:25.742093   29681 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1212 20:20:25.752158   29681 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1212 20:20:25.752194   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1212 20:20:25.752231   29681 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1212 20:20:25.752268   29681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1212 20:20:25.752269   29681 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1212 20:20:25.759335   29681 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1212 20:20:25.759395   29681 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1212 20:20:25.759420   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1212 20:20:26.819485   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:20:26.833940   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1212 20:20:26.834053   29681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1212 20:20:26.838500   29681 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1212 20:20:26.838547   29681 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1212 20:20:26.838570   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I1212 20:20:29.643561   29681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1212 20:20:29.643639   29681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1212 20:20:29.648400   29681 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1212 20:20:29.648438   29681 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1212 20:20:29.648468   29681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1212 20:20:29.878033   29681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 20:20:29.888161   29681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 20:20:29.905086   29681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:20:29.921293   29681 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I1212 20:20:29.925277   29681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:20:29.936926   29681 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:20:29.937240   29681 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:20:29.937323   29681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:20:29.937378   29681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:20:29.951311   29681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I1212 20:20:29.951706   29681 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:20:29.952120   29681 main.go:141] libmachine: Using API Version  1
	I1212 20:20:29.952143   29681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:20:29.952483   29681 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:20:29.952671   29681 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:20:29.952833   29681 start.go:304] JoinCluster: &{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:20:29.952919   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 20:20:29.952936   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:20:29.955860   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:20:29.956259   29681 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:20:29.956286   29681 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:20:29.956433   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:20:29.956608   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:20:29.956780   29681 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:20:29.956919   29681 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:20:30.137820   29681 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 75aw1m.b4274jwzfimxowur --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 20:20:30.142237   29681 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 20:20:30.142279   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 75aw1m.b4274jwzfimxowur --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-562818-m02"
	I1212 20:20:30.192829   29681 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 20:20:30.373120   29681 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 20:20:30.373161   29681 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 20:20:30.409239   29681 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:20:30.409274   29681 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:20:30.409282   29681 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 20:20:30.527186   29681 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 20:20:33.542909   29681 command_runner.go:130] > This node has joined the cluster:
	I1212 20:20:33.542942   29681 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 20:20:33.542952   29681 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 20:20:33.542962   29681 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 20:20:33.544768   29681 command_runner.go:130] ! W1212 20:20:30.183728     819 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 20:20:33.544795   29681 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:20:33.544823   29681 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 75aw1m.b4274jwzfimxowur --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-562818-m02": (3.402528519s)
	I1212 20:20:33.544845   29681 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 20:20:33.807111   29681 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1212 20:20:33.807259   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=multinode-562818 minikube.k8s.io/updated_at=2023_12_12T20_20_33_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:20:33.919162   29681 command_runner.go:130] > node/multinode-562818-m02 labeled
	I1212 20:20:33.920835   29681 start.go:306] JoinCluster complete in 3.968000595s
	I1212 20:20:33.920857   29681 cni.go:84] Creating CNI manager for ""
	I1212 20:20:33.920863   29681 cni.go:136] 2 nodes found, recommending kindnet
	I1212 20:20:33.920904   29681 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:20:33.925798   29681 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 20:20:33.925829   29681 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 20:20:33.925838   29681 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 20:20:33.925846   29681 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:20:33.925855   29681 command_runner.go:130] > Access: 2023-12-12 20:19:04.330369533 +0000
	I1212 20:20:33.925863   29681 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 20:20:33.925871   29681 command_runner.go:130] > Change: 2023-12-12 20:19:02.458369533 +0000
	I1212 20:20:33.925882   29681 command_runner.go:130] >  Birth: -
	I1212 20:20:33.925956   29681 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 20:20:33.925977   29681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 20:20:33.945558   29681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:20:34.239922   29681 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:20:34.244532   29681 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:20:34.250719   29681 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 20:20:34.268781   29681 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 20:20:34.271618   29681 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:20:34.271815   29681 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:20:34.272116   29681 round_trippers.go:463] GET https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:20:34.272132   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:34.272139   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:34.272149   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:34.274652   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:34.274672   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:34.274680   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:34.274685   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:34.274690   29681 round_trippers.go:580]     Content-Length: 291
	I1212 20:20:34.274695   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:34 GMT
	I1212 20:20:34.274701   29681 round_trippers.go:580]     Audit-Id: a272b859-ac62-477f-8d39-72197db74831
	I1212 20:20:34.274706   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:34.274711   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:34.274738   29681 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"414","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 20:20:34.274819   29681 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-562818" context rescaled to 1 replicas
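	The scale requests above go through the Deployment's scale subresource (GET/PUT on .../deployments/coredns/scale). The following is only a minimal client-go sketch of the same rescale, not minikube's own code; it assumes a clientset built from the kubeconfig path shown earlier in this log.
	// Hypothetical sketch: rescale kube-system/coredns to 1 replica via the
	// scale subresource, mirroring the requests logged above.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a clientset from the test's kubeconfig (path taken from the log).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17734-9188/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		ctx := context.Background()
		// Read the current scale of the coredns Deployment via the scale subresource.
		scale, err := clientset.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			// Write the new replica count back through the same subresource.
			if _, err := clientset.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("coredns replicas:", scale.Spec.Replicas)
	}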
	I1212 20:20:34.274845   29681 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 20:20:34.289445   29681 out.go:177] * Verifying Kubernetes components...
	I1212 20:20:34.291390   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:20:34.319748   29681 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:20:34.320084   29681 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:20:34.320338   29681 node_ready.go:35] waiting up to 6m0s for node "multinode-562818-m02" to be "Ready" ...
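	The GET requests that follow poll the new node roughly every 500ms until its NodeReady condition reports True or the 6m budget runs out. As an illustration only (again assuming a clientset built from the same kubeconfig, not minikube's actual helper), the wait amounts to:
	// Hypothetical sketch of the readiness wait announced above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17734-9188/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Poll about every 500ms, for at most 6 minutes, until the node's
		// NodeReady condition is True (mirrors the GET loop in the log below).
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := clientset.CoreV1().Nodes().Get(context.Background(), "multinode-562818-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("node Ready:", err == nil)
	}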
	I1212 20:20:34.320411   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:34.320422   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:34.320432   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:34.320439   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:34.323854   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:34.323878   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:34.323888   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:34.323897   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:34.323905   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:34.323912   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:34.323920   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:34.323928   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:34 GMT
	I1212 20:20:34.323940   29681 round_trippers.go:580]     Audit-Id: 0c012c17-8801-4ebf-83f9-02cd33d9b515
	I1212 20:20:34.324032   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:34.324382   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:34.324401   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:34.324413   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:34.324422   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:34.328062   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:34.328088   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:34.328098   29681 round_trippers.go:580]     Audit-Id: c5689ed1-2af4-4c58-bc56-00b9b58325cc
	I1212 20:20:34.328107   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:34.328114   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:34.328123   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:34.328132   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:34.328141   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:34.328149   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:34 GMT
	I1212 20:20:34.328245   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:34.829352   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:34.829380   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:34.829388   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:34.829394   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:34.832494   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:34.832524   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:34.832533   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:34 GMT
	I1212 20:20:34.832541   29681 round_trippers.go:580]     Audit-Id: 519fc911-10c2-4126-aed1-42ffb252db1e
	I1212 20:20:34.832549   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:34.832556   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:34.832564   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:34.832577   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:34.832585   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:34.832689   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:35.328687   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:35.328720   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:35.328731   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:35.328740   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:35.333638   29681 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:20:35.333674   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:35.333685   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:35.333699   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:35 GMT
	I1212 20:20:35.333708   29681 round_trippers.go:580]     Audit-Id: 5124f9bb-fbb8-4c9c-94b0-0dd76434fc79
	I1212 20:20:35.333722   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:35.333731   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:35.333744   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:35.333753   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:35.333878   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:35.829414   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:35.829442   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:35.829451   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:35.829461   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:35.832333   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:35.832363   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:35.832379   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:35.832390   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:35 GMT
	I1212 20:20:35.832400   29681 round_trippers.go:580]     Audit-Id: 652fefd3-abf3-4065-b624-0b7353b4bfa7
	I1212 20:20:35.832410   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:35.832419   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:35.832433   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:35.832444   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:35.832498   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:36.328694   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:36.328716   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:36.328725   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:36.328731   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:36.332029   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:36.332053   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:36.332061   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:36.332066   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:36 GMT
	I1212 20:20:36.332071   29681 round_trippers.go:580]     Audit-Id: f1d4169c-bb2f-438f-9214-072a613e5ca1
	I1212 20:20:36.332077   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:36.332083   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:36.332100   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:36.332109   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:36.332198   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:36.332559   29681 node_ready.go:58] node "multinode-562818-m02" has status "Ready":"False"
	I1212 20:20:36.829370   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:36.829393   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:36.829401   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:36.829407   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:36.832159   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:36.832186   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:36.832197   29681 round_trippers.go:580]     Audit-Id: 9c54a094-6b7e-4eb0-a785-a691d6dbe857
	I1212 20:20:36.832203   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:36.832211   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:36.832218   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:36.832226   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:36.832234   29681 round_trippers.go:580]     Content-Length: 4082
	I1212 20:20:36.832245   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:36 GMT
	I1212 20:20:36.832339   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"468","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 20:20:37.328904   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:37.328939   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:37.328949   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:37.328956   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:37.331977   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:37.331999   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:37.332007   29681 round_trippers.go:580]     Audit-Id: 5356a924-92a2-4213-8fd5-ef91e600b6cb
	I1212 20:20:37.332013   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:37.332018   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:37.332023   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:37.332028   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:37.332033   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:37 GMT
	I1212 20:20:37.332521   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:37.829175   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:37.829208   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:37.829222   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:37.829230   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:37.832616   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:37.832643   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:37.832653   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:37.832662   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:37.832670   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:37 GMT
	I1212 20:20:37.832678   29681 round_trippers.go:580]     Audit-Id: 7a6e3179-0eff-4758-9f59-7f7aad7fe880
	I1212 20:20:37.832686   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:37.832697   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:37.832815   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:38.329441   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:38.329477   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:38.329485   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:38.329492   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:38.333174   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:38.333200   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:38.333210   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:38.333218   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:38.333227   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:38 GMT
	I1212 20:20:38.333235   29681 round_trippers.go:580]     Audit-Id: b18dfe38-680d-42aa-8955-c69d2ed885dd
	I1212 20:20:38.333243   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:38.333261   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:38.333588   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:38.333894   29681 node_ready.go:58] node "multinode-562818-m02" has status "Ready":"False"
	I1212 20:20:38.828910   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:38.828932   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:38.828940   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:38.828947   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:38.831843   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:38.831881   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:38.831892   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:38.831902   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:38.831911   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:38.831920   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:38.831930   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:38 GMT
	I1212 20:20:38.831942   29681 round_trippers.go:580]     Audit-Id: 5a35afd1-e50f-4c9b-a7ce-e6ec87423774
	I1212 20:20:38.832124   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:39.328720   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:39.328750   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:39.328761   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:39.328769   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:39.331254   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:39.331271   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:39.331280   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:39.331287   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:39.331294   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:39.331302   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:39 GMT
	I1212 20:20:39.331310   29681 round_trippers.go:580]     Audit-Id: 064faec1-df5c-4ac3-9371-a69d89a0b0f3
	I1212 20:20:39.331321   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:39.331702   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:39.829430   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:39.829457   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:39.829470   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:39.829476   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:39.833125   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:39.833146   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:39.833157   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:39.833164   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:39.833171   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:39 GMT
	I1212 20:20:39.833179   29681 round_trippers.go:580]     Audit-Id: 59df8e69-135f-4d55-aaee-59f71f9c4ac1
	I1212 20:20:39.833205   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:39.833215   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:39.834258   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:40.329599   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:40.329627   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:40.329635   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:40.329641   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:40.335031   29681 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 20:20:40.335057   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:40.335067   29681 round_trippers.go:580]     Audit-Id: 3f32137b-9d9f-464a-a5ba-1d7f7f7c5b1d
	I1212 20:20:40.335077   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:40.335085   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:40.335094   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:40.335102   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:40.335109   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:40 GMT
	I1212 20:20:40.336311   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:40.336563   29681 node_ready.go:58] node "multinode-562818-m02" has status "Ready":"False"
	I1212 20:20:40.828961   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:40.828990   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:40.828999   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:40.829005   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:40.833694   29681 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:20:40.833723   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:40.833734   29681 round_trippers.go:580]     Audit-Id: 41b3aee8-3c8d-43ce-8231-2450371fe9ee
	I1212 20:20:40.833744   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:40.833752   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:40.833765   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:40.833778   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:40.833789   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:40 GMT
	I1212 20:20:40.834598   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"475","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1212 20:20:41.328976   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:41.329000   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.329009   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.329015   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.331266   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:41.331287   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.331295   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.331303   29681 round_trippers.go:580]     Audit-Id: e7951470-349d-45c2-8281-8491bbba4921
	I1212 20:20:41.331311   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.331319   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.331324   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.331329   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.331662   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"492","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I1212 20:20:41.331893   29681 node_ready.go:49] node "multinode-562818-m02" has status "Ready":"True"
	I1212 20:20:41.331903   29681 node_ready.go:38] duration metric: took 7.01155003s waiting for node "multinode-562818-m02" to be "Ready" ...
	I1212 20:20:41.331913   29681 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:20:41.331958   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:20:41.331962   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.331969   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.331975   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.335259   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:41.335275   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.335283   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.335297   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.335305   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.335314   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.335323   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.335330   29681 round_trippers.go:580]     Audit-Id: 604bdf09-8734-41e7-ac35-81996d96f4ca
	I1212 20:20:41.336858   29681 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"410","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67324 chars]
	I1212 20:20:41.338780   29681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.338870   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:20:41.338879   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.338886   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.338892   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.340918   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:41.340933   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.340939   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.340944   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.340949   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.340954   29681 round_trippers.go:580]     Audit-Id: 4b355f7c-ae87-49ba-a10c-a639861a2a49
	I1212 20:20:41.340959   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.340964   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.341125   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"410","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 20:20:41.341490   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:41.341501   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.341508   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.341514   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.343325   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:20:41.343338   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.343343   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.343349   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.343354   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.343359   29681 round_trippers.go:580]     Audit-Id: 0aa72e9d-1d11-4d9f-8436-0394c94bd6d7
	I1212 20:20:41.343364   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.343369   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.343700   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:20:41.343963   29681 pod_ready.go:92] pod "coredns-5dd5756b68-689lp" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:41.343977   29681 pod_ready.go:81] duration metric: took 5.178651ms waiting for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.343985   29681 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.344027   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-562818
	I1212 20:20:41.344034   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.344040   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.344046   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.345800   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:20:41.345823   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.345839   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.345849   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.345860   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.345869   29681 round_trippers.go:580]     Audit-Id: b9ad3ce0-a6fb-4a4f-98ce-22903b6cf957
	I1212 20:20:41.345874   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.345882   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.346031   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"363","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 20:20:41.346485   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:41.346507   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.346518   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.346526   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.348594   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:41.348608   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.348615   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.348622   29681 round_trippers.go:580]     Audit-Id: 21e6112a-bee1-4b00-b689-4f5990a14d92
	I1212 20:20:41.348630   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.348639   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.348646   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.348656   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.348810   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:20:41.349191   29681 pod_ready.go:92] pod "etcd-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:41.349210   29681 pod_ready.go:81] duration metric: took 5.219988ms waiting for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.349223   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.349264   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:20:41.349271   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.349278   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.349284   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.350940   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:20:41.350952   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.350958   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.350963   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.350968   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.350974   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.350981   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.350991   29681 round_trippers.go:580]     Audit-Id: c3978464-8308-4426-ad5f-0ffaee519457
	I1212 20:20:41.351228   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"398","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 20:20:41.351663   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:41.351678   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.351685   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.351690   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.353438   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:20:41.353450   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.353455   29681 round_trippers.go:580]     Audit-Id: bcd523a8-755d-4f0b-b672-56b487d791af
	I1212 20:20:41.353462   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.353468   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.353473   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.353480   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.353489   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.353679   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:20:41.353953   29681 pod_ready.go:92] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:41.353967   29681 pod_ready.go:81] duration metric: took 4.738852ms waiting for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.353974   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.354026   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-562818
	I1212 20:20:41.354038   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.354049   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.354058   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.355792   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:20:41.355811   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.355820   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.355828   29681 round_trippers.go:580]     Audit-Id: eab2de8e-6a1a-41c5-93bd-965babd900ad
	I1212 20:20:41.355836   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.355851   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.355860   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.355868   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.355981   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-562818","namespace":"kube-system","uid":"23b73a4b-e188-4b7c-a13d-1fd61862a4e1","resourceVersion":"399","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.mirror":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.seen":"2023-12-12T20:19:35.712598374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 20:20:41.356386   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:41.356400   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.356407   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.356413   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.357999   29681 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:20:41.358012   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.358017   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.358023   29681 round_trippers.go:580]     Audit-Id: 1170f7f3-1408-4a73-914a-bc75ec852e1c
	I1212 20:20:41.358032   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.358044   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.358052   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.358060   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.358225   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:20:41.358569   29681 pod_ready.go:92] pod "kube-controller-manager-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:41.358590   29681 pod_ready.go:81] duration metric: took 4.609428ms waiting for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.358601   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.529999   29681 request.go:629] Waited for 171.341315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:20:41.530060   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:20:41.530066   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.530073   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.530080   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.532918   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:41.532947   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.532957   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.532965   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.532974   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.532983   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.532991   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.533000   29681 round_trippers.go:580]     Audit-Id: 821b098c-38cb-4fbe-8c67-cb135652f9f0
	I1212 20:20:41.533151   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4rrmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"2bcd718f-0c7c-461a-895e-44a0c1d566fd","resourceVersion":"378","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 20:20:41.729995   29681 request.go:629] Waited for 196.379721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:41.730066   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:41.730071   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.730078   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.730085   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.733287   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:41.733311   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.733321   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.733328   29681 round_trippers.go:580]     Audit-Id: 0855d418-20a8-4713-8d30-774d2a8b0cda
	I1212 20:20:41.733336   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.733342   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.733349   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.733356   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.734209   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:20:41.734543   29681 pod_ready.go:92] pod "kube-proxy-4rrmn" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:41.734560   29681 pod_ready.go:81] duration metric: took 375.950647ms waiting for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.734572   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:41.930048   29681 request.go:629] Waited for 195.39045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:20:41.930108   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:20:41.930113   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:41.930121   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:41.930127   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:41.933903   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:41.933928   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:41.933937   29681 round_trippers.go:580]     Audit-Id: cd98e3f9-3967-4f08-9b20-6dbe3a1e006e
	I1212 20:20:41.933946   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:41.933953   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:41.933961   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:41.933967   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:41.933973   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:41 GMT
	I1212 20:20:41.934158   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sxw8h","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f281e87-2597-4bd0-8ca4-cd7556c0a8e4","resourceVersion":"481","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 20:20:42.129980   29681 request.go:629] Waited for 195.390728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:42.130068   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:20:42.130075   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:42.130086   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:42.130095   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:42.132970   29681 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:20:42.132992   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:42.132998   29681 round_trippers.go:580]     Audit-Id: 50351493-6531-44e6-9acc-4395ac8f2e64
	I1212 20:20:42.133004   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:42.133014   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:42.133019   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:42.133024   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:42.133029   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:42 GMT
	I1212 20:20:42.133661   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"492","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_20_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I1212 20:20:42.133929   29681 pod_ready.go:92] pod "kube-proxy-sxw8h" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:42.133946   29681 pod_ready.go:81] duration metric: took 399.365723ms waiting for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:42.133955   29681 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:42.329445   29681 request.go:629] Waited for 195.410139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:20:42.329515   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:20:42.329523   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:42.329533   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:42.329543   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:42.332772   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:42.332801   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:42.332808   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:42.332814   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:42.332819   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:42 GMT
	I1212 20:20:42.332824   29681 round_trippers.go:580]     Audit-Id: 31fa5b05-32d6-4dc2-8f08-d78a7ba1edc9
	I1212 20:20:42.332829   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:42.332834   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:42.333463   29681 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"400","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 20:20:42.529113   29681 request.go:629] Waited for 195.289691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:42.529182   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:20:42.529187   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:42.529195   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:42.529203   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:42.532376   29681 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:20:42.532400   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:42.532409   29681 round_trippers.go:580]     Audit-Id: cdfd3002-e9cd-4c06-941f-937c9fa4949d
	I1212 20:20:42.532417   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:42.532425   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:42.532432   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:42.532439   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:42.532456   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:42 GMT
	I1212 20:20:42.532672   29681 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 20:20:42.533002   29681 pod_ready.go:92] pod "kube-scheduler-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:20:42.533020   29681 pod_ready.go:81] duration metric: took 399.058598ms waiting for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:20:42.533029   29681 pod_ready.go:38] duration metric: took 1.201107733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:20:42.533053   29681 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:20:42.533095   29681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:20:42.549165   29681 system_svc.go:56] duration metric: took 16.106312ms WaitForService to wait for kubelet.
	I1212 20:20:42.549199   29681 kubeadm.go:581] duration metric: took 8.27432826s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:20:42.549218   29681 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:20:42.729641   29681 request.go:629] Waited for 180.35682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I1212 20:20:42.729722   29681 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I1212 20:20:42.729729   29681 round_trippers.go:469] Request Headers:
	I1212 20:20:42.729741   29681 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:20:42.729755   29681 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:20:42.734124   29681 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:20:42.734147   29681 round_trippers.go:577] Response Headers:
	I1212 20:20:42.734155   29681 round_trippers.go:580]     Audit-Id: c0854707-0140-44c9-b2bc-1196fe1477f9
	I1212 20:20:42.734161   29681 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:20:42.734166   29681 round_trippers.go:580]     Content-Type: application/json
	I1212 20:20:42.734171   29681 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:20:42.734176   29681 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:20:42.734182   29681 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:20:42 GMT
	I1212 20:20:42.734371   29681 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"494"},"items":[{"metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"390","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10076 chars]
	I1212 20:20:42.734864   29681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:20:42.734884   29681 node_conditions.go:123] node cpu capacity is 2
	I1212 20:20:42.734894   29681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:20:42.734898   29681 node_conditions.go:123] node cpu capacity is 2
	I1212 20:20:42.734903   29681 node_conditions.go:105] duration metric: took 185.680672ms to run NodePressure ...
	I1212 20:20:42.734916   29681 start.go:228] waiting for startup goroutines ...
	I1212 20:20:42.734941   29681 start.go:242] writing updated cluster config ...
	I1212 20:20:42.735222   29681 ssh_runner.go:195] Run: rm -f paused
	I1212 20:20:42.783107   29681 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 20:20:42.786532   29681 out.go:177] * Done! kubectl is now configured to use "multinode-562818" cluster and "default" namespace by default
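	The pod_ready.go entries above record minikube polling the API server until each kube-system pod reports Ready, with a 6m0s cap per pod and client-side throttling between GETs. The following is a minimal client-go sketch of that kind of readiness poll, for illustration only; the pod name, poll interval, and kubeconfig path are assumptions for the example, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: use the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s and give up after 6 minutes, the per-pod timeout seen in the log.
		// The pod name here is just the one that appears above; substitute as needed.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-4rrmn", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet" and keep polling
				}
				return podReady(pod), nil
			})
		fmt.Println("pod ready:", err == nil)
	}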
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 20:19:03 UTC, ends at Tue 2023-12-12 20:20:49 UTC. --
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.480753077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412449480737785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=daa9f127-f072-4303-b687-c28060e904ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.481575875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e09ea465-aee8-4ed1-9925-9643c9cf9c27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.481623539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e09ea465-aee8-4ed1-9925-9643c9cf9c27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.481793048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54029f240b124fc97fc65f6a2db15d39f2833620b7bdc437511d993c2266687b,PodSandboxId:209819b5b8ed0e0263c2c1c3b059383a88417ebfbc5bfee4a9f83d36b2bb6694,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702412445378588064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f104840b613bc3a364b5229a9c99a3d10a1816f1b360d3469f9d5a836fac9d8b,PodSandboxId:511a153d9b1821997fb33d650b2861383c8577dc8cc712dfa99f863fa7626408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702412395922343719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2959568b1415630f662242714bc5ce7fb54d2440a2b1dd74d19c1ae258658a3,PodSandboxId:48d83378da53df27a20aae8a7c5a47338509c1c2ed648bbd6a90135015a50e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702412395740732317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03be7e04a12b9e042baefe1eefb78f4b7c1950d7a791627453d1d767434bf99d,PodSandboxId:8ef0b9734d565d67c91267c432edbd5df7ca7b44d71b22f6ef72100045e8c7d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702412392976133032,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7744a262602873099b644550a73caa775fb3e7851c866ebdbb54f6a39c00764,PodSandboxId:862864bb7345c293ccae2f90a0d4fc312b8b3bcb1fa1119991f3fea9ecfc8ebc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702412390733593168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1
d566fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8141638158b79b1877d35d115dcf0d65186ff1b8850545b08ecf73e63a4bfd,PodSandboxId:3ac1dffe4514076f990b38899a5e1ce0a6c4b50187dd457d1c0c30cd54e6223f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412368499005508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes
.container.hash: fcfc309f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d417b39813c3bebd3548bf5ea5e40824b42c0e084cfe8ea373ace50045e8e0c5,PodSandboxId:32d9d085a52f6434f14317d12289cd17632901a3c58f98cc3f6ef608abf87df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412368391016000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557,PodSandboxId:fe916696b6eca9b2d4a8d34b86db414077c16600d848267df7287c777a04df72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412368095327344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f8ce
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e153eaaaeac71226fe3033a5cd50190bdc190a6d1bc537f27b10ba2d4b5ebb09,PodSandboxId:37615bc8fbb59f0bbb5832ef3483022c679a92504bb241286be9b86fbae8a698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412367981733131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e09ea465-aee8-4ed1-9925-9643c9cf9c27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.523281216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d778f01f-7899-4fd7-804c-7675e1b11a51 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.523364277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d778f01f-7899-4fd7-804c-7675e1b11a51 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.524835507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=643e3f4c-d4f1-4897-ad4d-4151b95efbaf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.525217052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412449525203920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=643e3f4c-d4f1-4897-ad4d-4151b95efbaf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.525797557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d4d17c5-9534-42dc-bb0a-9f04f2d21277 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.525870989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d4d17c5-9534-42dc-bb0a-9f04f2d21277 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.526066890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54029f240b124fc97fc65f6a2db15d39f2833620b7bdc437511d993c2266687b,PodSandboxId:209819b5b8ed0e0263c2c1c3b059383a88417ebfbc5bfee4a9f83d36b2bb6694,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702412445378588064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f104840b613bc3a364b5229a9c99a3d10a1816f1b360d3469f9d5a836fac9d8b,PodSandboxId:511a153d9b1821997fb33d650b2861383c8577dc8cc712dfa99f863fa7626408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702412395922343719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2959568b1415630f662242714bc5ce7fb54d2440a2b1dd74d19c1ae258658a3,PodSandboxId:48d83378da53df27a20aae8a7c5a47338509c1c2ed648bbd6a90135015a50e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702412395740732317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03be7e04a12b9e042baefe1eefb78f4b7c1950d7a791627453d1d767434bf99d,PodSandboxId:8ef0b9734d565d67c91267c432edbd5df7ca7b44d71b22f6ef72100045e8c7d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702412392976133032,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7744a262602873099b644550a73caa775fb3e7851c866ebdbb54f6a39c00764,PodSandboxId:862864bb7345c293ccae2f90a0d4fc312b8b3bcb1fa1119991f3fea9ecfc8ebc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702412390733593168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1
d566fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8141638158b79b1877d35d115dcf0d65186ff1b8850545b08ecf73e63a4bfd,PodSandboxId:3ac1dffe4514076f990b38899a5e1ce0a6c4b50187dd457d1c0c30cd54e6223f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412368499005508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes
.container.hash: fcfc309f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d417b39813c3bebd3548bf5ea5e40824b42c0e084cfe8ea373ace50045e8e0c5,PodSandboxId:32d9d085a52f6434f14317d12289cd17632901a3c58f98cc3f6ef608abf87df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412368391016000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557,PodSandboxId:fe916696b6eca9b2d4a8d34b86db414077c16600d848267df7287c777a04df72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412368095327344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f8ce
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e153eaaaeac71226fe3033a5cd50190bdc190a6d1bc537f27b10ba2d4b5ebb09,PodSandboxId:37615bc8fbb59f0bbb5832ef3483022c679a92504bb241286be9b86fbae8a698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412367981733131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d4d17c5-9534-42dc-bb0a-9f04f2d21277 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.565497930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5c5836a2-58cb-4e58-8d60-7eb5022561e8 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.565606719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5c5836a2-58cb-4e58-8d60-7eb5022561e8 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.566913754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=57621a93-a0ba-403d-ac1c-0f17bafaab8c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.567345256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412449567332807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=57621a93-a0ba-403d-ac1c-0f17bafaab8c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.568305681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=37cecf37-5f1b-47af-b16d-eb06a4e69a74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.568356178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=37cecf37-5f1b-47af-b16d-eb06a4e69a74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.568607637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54029f240b124fc97fc65f6a2db15d39f2833620b7bdc437511d993c2266687b,PodSandboxId:209819b5b8ed0e0263c2c1c3b059383a88417ebfbc5bfee4a9f83d36b2bb6694,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702412445378588064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f104840b613bc3a364b5229a9c99a3d10a1816f1b360d3469f9d5a836fac9d8b,PodSandboxId:511a153d9b1821997fb33d650b2861383c8577dc8cc712dfa99f863fa7626408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702412395922343719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2959568b1415630f662242714bc5ce7fb54d2440a2b1dd74d19c1ae258658a3,PodSandboxId:48d83378da53df27a20aae8a7c5a47338509c1c2ed648bbd6a90135015a50e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702412395740732317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03be7e04a12b9e042baefe1eefb78f4b7c1950d7a791627453d1d767434bf99d,PodSandboxId:8ef0b9734d565d67c91267c432edbd5df7ca7b44d71b22f6ef72100045e8c7d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702412392976133032,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7744a262602873099b644550a73caa775fb3e7851c866ebdbb54f6a39c00764,PodSandboxId:862864bb7345c293ccae2f90a0d4fc312b8b3bcb1fa1119991f3fea9ecfc8ebc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702412390733593168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1
d566fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8141638158b79b1877d35d115dcf0d65186ff1b8850545b08ecf73e63a4bfd,PodSandboxId:3ac1dffe4514076f990b38899a5e1ce0a6c4b50187dd457d1c0c30cd54e6223f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412368499005508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes
.container.hash: fcfc309f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d417b39813c3bebd3548bf5ea5e40824b42c0e084cfe8ea373ace50045e8e0c5,PodSandboxId:32d9d085a52f6434f14317d12289cd17632901a3c58f98cc3f6ef608abf87df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412368391016000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557,PodSandboxId:fe916696b6eca9b2d4a8d34b86db414077c16600d848267df7287c777a04df72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412368095327344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f8ce
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e153eaaaeac71226fe3033a5cd50190bdc190a6d1bc537f27b10ba2d4b5ebb09,PodSandboxId:37615bc8fbb59f0bbb5832ef3483022c679a92504bb241286be9b86fbae8a698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412367981733131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=37cecf37-5f1b-47af-b16d-eb06a4e69a74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.605969753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b576b515-8b8d-4f44-9be7-41f60f9576fe name=/runtime.v1.RuntimeService/Version
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.606031016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b576b515-8b8d-4f44-9be7-41f60f9576fe name=/runtime.v1.RuntimeService/Version
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.607208147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=85705d24-3ed9-456f-a639-28ac3f3bec88 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.607713772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702412449607696902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=85705d24-3ed9-456f-a639-28ac3f3bec88 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.608707023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d889ea1-2af9-4fd9-8096-32d0ec496f13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.608765164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d889ea1-2af9-4fd9-8096-32d0ec496f13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:20:49 multinode-562818 crio[716]: time="2023-12-12 20:20:49.608942030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54029f240b124fc97fc65f6a2db15d39f2833620b7bdc437511d993c2266687b,PodSandboxId:209819b5b8ed0e0263c2c1c3b059383a88417ebfbc5bfee4a9f83d36b2bb6694,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702412445378588064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f104840b613bc3a364b5229a9c99a3d10a1816f1b360d3469f9d5a836fac9d8b,PodSandboxId:511a153d9b1821997fb33d650b2861383c8577dc8cc712dfa99f863fa7626408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702412395922343719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2959568b1415630f662242714bc5ce7fb54d2440a2b1dd74d19c1ae258658a3,PodSandboxId:48d83378da53df27a20aae8a7c5a47338509c1c2ed648bbd6a90135015a50e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702412395740732317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03be7e04a12b9e042baefe1eefb78f4b7c1950d7a791627453d1d767434bf99d,PodSandboxId:8ef0b9734d565d67c91267c432edbd5df7ca7b44d71b22f6ef72100045e8c7d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702412392976133032,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7744a262602873099b644550a73caa775fb3e7851c866ebdbb54f6a39c00764,PodSandboxId:862864bb7345c293ccae2f90a0d4fc312b8b3bcb1fa1119991f3fea9ecfc8ebc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702412390733593168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1
d566fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8141638158b79b1877d35d115dcf0d65186ff1b8850545b08ecf73e63a4bfd,PodSandboxId:3ac1dffe4514076f990b38899a5e1ce0a6c4b50187dd457d1c0c30cd54e6223f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412368499005508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes
.container.hash: fcfc309f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d417b39813c3bebd3548bf5ea5e40824b42c0e084cfe8ea373ace50045e8e0c5,PodSandboxId:32d9d085a52f6434f14317d12289cd17632901a3c58f98cc3f6ef608abf87df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412368391016000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557,PodSandboxId:fe916696b6eca9b2d4a8d34b86db414077c16600d848267df7287c777a04df72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412368095327344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f8ce
a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e153eaaaeac71226fe3033a5cd50190bdc190a6d1bc537f27b10ba2d4b5ebb09,PodSandboxId:37615bc8fbb59f0bbb5832ef3483022c679a92504bb241286be9b86fbae8a698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412367981733131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d889ea1-2af9-4fd9-8096-32d0ec496f13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	54029f240b124       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   209819b5b8ed0       busybox-5bc68d56bd-9wvsx
	f104840b613bc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      53 seconds ago       Running             coredns                   0                   511a153d9b182       coredns-5dd5756b68-689lp
	c2959568b1415       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      53 seconds ago       Running             storage-provisioner       0                   48d83378da53d       storage-provisioner
	03be7e04a12b9       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      56 seconds ago       Running             kindnet-cni               0                   8ef0b9734d565       kindnet-24p9c
	b7744a2626028       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      58 seconds ago       Running             kube-proxy                0                   862864bb7345c       kube-proxy-4rrmn
	bc8141638158b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   3ac1dffe45140       etcd-multinode-562818
	d417b39813c3b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   32d9d085a52f6       kube-scheduler-multinode-562818
	81e44fb96ef3b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   fe916696b6eca       kube-apiserver-multinode-562818
	e153eaaaeac71       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   37615bc8fbb59       kube-controller-manager-multinode-562818
	
	
	==> coredns [f104840b613bc3a364b5229a9c99a3d10a1816f1b360d3469f9d5a836fac9d8b] <==
	[INFO] 10.244.0.3:47726 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116629s
	[INFO] 10.244.1.2:50294 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218185s
	[INFO] 10.244.1.2:56644 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002495278s
	[INFO] 10.244.1.2:43228 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097512s
	[INFO] 10.244.1.2:38872 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140339s
	[INFO] 10.244.1.2:59300 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001705439s
	[INFO] 10.244.1.2:52766 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098183s
	[INFO] 10.244.1.2:41167 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103405s
	[INFO] 10.244.1.2:45561 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168294s
	[INFO] 10.244.0.3:47250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250511s
	[INFO] 10.244.0.3:51532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083146s
	[INFO] 10.244.0.3:57479 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105431s
	[INFO] 10.244.0.3:58641 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068225s
	[INFO] 10.244.1.2:43332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143885s
	[INFO] 10.244.1.2:58581 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137048s
	[INFO] 10.244.1.2:39333 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223379s
	[INFO] 10.244.1.2:42321 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118428s
	[INFO] 10.244.0.3:45322 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151382s
	[INFO] 10.244.0.3:34607 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000366905s
	[INFO] 10.244.0.3:36206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123929s
	[INFO] 10.244.0.3:51346 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120159s
	[INFO] 10.244.1.2:46812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165186s
	[INFO] 10.244.1.2:47283 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203257s
	[INFO] 10.244.1.2:41854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110527s
	[INFO] 10.244.1.2:41344 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102918s
	
	
	==> describe nodes <==
	Name:               multinode-562818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-562818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-562818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T20_19_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:19:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-562818
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:20:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:19:54 +0000   Tue, 12 Dec 2023 20:19:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:19:54 +0000   Tue, 12 Dec 2023 20:19:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:19:54 +0000   Tue, 12 Dec 2023 20:19:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:19:54 +0000   Tue, 12 Dec 2023 20:19:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    multinode-562818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 477de0dffe274051ae282f465573daea
	  System UUID:                477de0df-fe27-4051-ae28-2f465573daea
	  Boot ID:                    869ef52a-8591-4569-b273-c4259c6f3d1f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9wvsx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-689lp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-multinode-562818                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	  kube-system                 kindnet-24p9c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      61s
	  kube-system                 kube-apiserver-multinode-562818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-multinode-562818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-4rrmn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-multinode-562818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 58s   kube-proxy       
	  Normal  Starting                 74s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node multinode-562818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node multinode-562818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node multinode-562818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s   node-controller  Node multinode-562818 event: Registered Node multinode-562818 in Controller
	  Normal  NodeReady                55s   kubelet          Node multinode-562818 status is now: NodeReady
	
	
	Name:               multinode-562818-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-562818-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-562818
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T20_20_33_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:20:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-562818-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:20:41 +0000   Tue, 12 Dec 2023 20:20:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:20:41 +0000   Tue, 12 Dec 2023 20:20:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:20:41 +0000   Tue, 12 Dec 2023 20:20:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:20:41 +0000   Tue, 12 Dec 2023 20:20:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    multinode-562818-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bca6f1b61c874e68865500389e098c63
	  System UUID:                bca6f1b6-1c87-4e68-8655-00389e098c63
	  Boot ID:                    f4dccdc7-2ac9-4612-b312-e1bdd16bc5ef
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-vbpn5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-cmz7d               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-proxy-sxw8h            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  16s (x5 over 18s)  kubelet          Node multinode-562818-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x5 over 18s)  kubelet          Node multinode-562818-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x5 over 18s)  kubelet          Node multinode-562818-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node multinode-562818-m02 event: Registered Node multinode-562818-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-562818-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec12 20:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.401727] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec12 20:19] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150019] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.005642] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.135771] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.101085] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.138069] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.100600] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.212457] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.959198] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +8.780913] systemd-fstab-generator[1261]: Ignoring "noauto" for root device
	[ +21.587174] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [bc8141638158b79b1877d35d115dcf0d65186ff1b8850545b08ecf73e63a4bfd] <==
	{"level":"info","ts":"2023-12-12T20:19:30.198689Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:19:30.198944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 switched to configuration voters=(2477931171060957778)"}
	{"level":"info","ts":"2023-12-12T20:19:30.199052Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","added-peer-id":"226361457cf4c252","added-peer-peer-urls":["https://192.168.39.77:2380"]}
	{"level":"info","ts":"2023-12-12T20:19:30.940967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T20:19:30.941029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T20:19:30.941045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgPreVoteResp from 226361457cf4c252 at term 1"}
	{"level":"info","ts":"2023-12-12T20:19:30.941057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T20:19:30.941063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgVoteResp from 226361457cf4c252 at term 2"}
	{"level":"info","ts":"2023-12-12T20:19:30.941071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T20:19:30.941078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226361457cf4c252 elected leader 226361457cf4c252 at term 2"}
	{"level":"info","ts":"2023-12-12T20:19:30.942488Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"226361457cf4c252","local-member-attributes":"{Name:multinode-562818 ClientURLs:[https://192.168.39.77:2379]}","request-path":"/0/members/226361457cf4c252/attributes","cluster-id":"b43d13dd46d94ad8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:19:30.942667Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:19:30.943868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:19:30.94403Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:19:30.944174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:19:30.945212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.77:2379"}
	{"level":"info","ts":"2023-12-12T20:19:30.945303Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:19:30.94533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T20:19:30.965562Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:19:30.968716Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:19:30.968794Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:20:31.660919Z","caller":"traceutil/trace.go:171","msg":"trace[1222120561] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"150.659946ms","start":"2023-12-12T20:20:31.510145Z","end":"2023-12-12T20:20:31.660805Z","steps":["trace[1222120561] 'process raft request'  (duration: 150.505863ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T20:20:31.924256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.35457ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14002408534038339697 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-5kgqp\" mod_revision:443 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-5kgqp\" value_size:1264 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-5kgqp\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T20:20:31.924364Z","caller":"traceutil/trace.go:171","msg":"trace[1415838992] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"253.711256ms","start":"2023-12-12T20:20:31.670643Z","end":"2023-12-12T20:20:31.924354Z","steps":["trace[1415838992] 'process raft request'  (duration: 114.634638ms)","trace[1415838992] 'compare'  (duration: 138.012301ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T20:20:32.17588Z","caller":"traceutil/trace.go:171","msg":"trace[1273734234] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"245.076802ms","start":"2023-12-12T20:20:31.930787Z","end":"2023-12-12T20:20:32.175864Z","steps":["trace[1273734234] 'process raft request'  (duration: 167.216106ms)","trace[1273734234] 'compare'  (duration: 77.785636ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:20:49 up 1 min,  0 users,  load average: 0.63, 0.30, 0.11
	Linux multinode-562818 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [03be7e04a12b9e042baefe1eefb78f4b7c1950d7a791627453d1d767434bf99d] <==
	I1212 20:19:53.824323       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 20:19:53.824517       1 main.go:107] hostIP = 192.168.39.77
	podIP = 192.168.39.77
	I1212 20:19:53.824888       1 main.go:116] setting mtu 1500 for CNI 
	I1212 20:19:53.824953       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 20:19:53.825067       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 20:19:54.528107       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:19:54.528247       1 main.go:227] handling current node
	I1212 20:20:04.548608       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:20:04.548656       1 main.go:227] handling current node
	I1212 20:20:14.558112       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:20:14.558208       1 main.go:227] handling current node
	I1212 20:20:24.562930       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:20:24.563031       1 main.go:227] handling current node
	I1212 20:20:34.568790       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:20:34.568887       1 main.go:227] handling current node
	I1212 20:20:34.568920       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 20:20:34.568939       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	I1212 20:20:34.569117       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.65 Flags: [] Table: 0} 
	I1212 20:20:44.574148       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:20:44.574232       1 main.go:227] handling current node
	I1212 20:20:44.574264       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 20:20:44.574282       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557] <==
	I1212 20:19:32.394367       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 20:19:32.437326       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:19:32.437627       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:19:32.437656       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:19:32.437680       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:19:32.437702       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:19:32.463802       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:19:32.479210       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 20:19:32.479225       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:19:32.486020       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:19:33.296952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:19:33.302616       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:19:33.303043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:19:33.923610       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:19:33.991123       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:19:34.115591       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:19:34.122673       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.77]
	I1212 20:19:34.123673       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 20:19:34.128933       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:19:34.365926       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:19:35.599844       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:19:35.625205       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:19:35.636669       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 20:19:47.467314       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 20:19:48.174131       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [e153eaaaeac71226fe3033a5cd50190bdc190a6d1bc537f27b10ba2d4b5ebb09] <==
	I1212 20:19:48.593927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="249.07µs"
	I1212 20:19:54.895622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.171µs"
	I1212 20:19:54.924807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.104µs"
	I1212 20:19:56.968358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.353299ms"
	I1212 20:19:56.968675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.19µs"
	I1212 20:19:57.229319       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 20:20:33.045641       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-562818-m02\" does not exist"
	I1212 20:20:33.062954       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-562818-m02" podCIDRs=["10.244.1.0/24"]
	I1212 20:20:33.078316       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sxw8h"
	I1212 20:20:33.078507       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cmz7d"
	I1212 20:20:37.236150       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-562818-m02"
	I1212 20:20:37.236280       1 event.go:307] "Event occurred" object="multinode-562818-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-562818-m02 event: Registered Node multinode-562818-m02 in Controller"
	I1212 20:20:41.196287       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m02"
	I1212 20:20:43.475605       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 20:20:43.492963       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-vbpn5"
	I1212 20:20:43.501111       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9wvsx"
	I1212 20:20:43.542838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="66.822841ms"
	I1212 20:20:43.569240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.269667ms"
	I1212 20:20:43.569504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="129.188µs"
	I1212 20:20:43.577112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="106.345µs"
	I1212 20:20:43.579535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="269.357µs"
	I1212 20:20:45.935282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.906644ms"
	I1212 20:20:45.936143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.42µs"
	I1212 20:20:46.129760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.43176ms"
	I1212 20:20:46.130002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="168.215µs"
	
	
	==> kube-proxy [b7744a262602873099b644550a73caa775fb3e7851c866ebdbb54f6a39c00764] <==
	I1212 20:19:50.881430       1 server_others.go:69] "Using iptables proxy"
	I1212 20:19:50.899336       1 node.go:141] Successfully retrieved node IP: 192.168.39.77
	I1212 20:19:50.966526       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:19:50.966569       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:19:50.969900       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:19:50.969963       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:19:50.970148       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:19:50.970180       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:19:50.971127       1 config.go:188] "Starting service config controller"
	I1212 20:19:50.971178       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:19:50.971200       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:19:50.971204       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:19:50.971760       1 config.go:315] "Starting node config controller"
	I1212 20:19:50.971796       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:19:51.071662       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:19:51.071781       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:19:51.072093       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d417b39813c3bebd3548bf5ea5e40824b42c0e084cfe8ea373ace50045e8e0c5] <==
	W1212 20:19:32.415649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 20:19:32.415722       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 20:19:32.415816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 20:19:32.415846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 20:19:32.415910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:19:32.415938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 20:19:33.319758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 20:19:33.319818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 20:19:33.327655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 20:19:33.327709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 20:19:33.391155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 20:19:33.391208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 20:19:33.418978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 20:19:33.419150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 20:19:33.432638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 20:19:33.432736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 20:19:33.444055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:19:33.444108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 20:19:33.524369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 20:19:33.524470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 20:19:33.525245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 20:19:33.525291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 20:19:33.925734       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 20:19:33.925811       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 20:19:36.494504       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:19:03 UTC, ends at Tue 2023-12-12 20:20:50 UTC. --
	Dec 12 20:19:48 multinode-562818 kubelet[1268]: I1212 20:19:48.422894    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bcd718f-0c7c-461a-895e-44a0c1d566fd-xtables-lock\") pod \"kube-proxy-4rrmn\" (UID: \"2bcd718f-0c7c-461a-895e-44a0c1d566fd\") " pod="kube-system/kube-proxy-4rrmn"
	Dec 12 20:19:49 multinode-562818 kubelet[1268]: E1212 20:19:49.443860    1268 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 20:19:49 multinode-562818 kubelet[1268]: E1212 20:19:49.443936    1268 projected.go:198] Error preparing data for projected volume kube-api-access-bdt9d for pod kube-system/kindnet-24p9c: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 20:19:49 multinode-562818 kubelet[1268]: E1212 20:19:49.444133    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e80eb9ab-2919-4be1-890d-34c26202f7fc-kube-api-access-bdt9d podName:e80eb9ab-2919-4be1-890d-34c26202f7fc nodeName:}" failed. No retries permitted until 2023-12-12 20:19:49.944012083 +0000 UTC m=+14.394157859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bdt9d" (UniqueName: "kubernetes.io/projected/e80eb9ab-2919-4be1-890d-34c26202f7fc-kube-api-access-bdt9d") pod "kindnet-24p9c" (UID: "e80eb9ab-2919-4be1-890d-34c26202f7fc") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 20:19:49 multinode-562818 kubelet[1268]: E1212 20:19:49.543202    1268 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 20:19:49 multinode-562818 kubelet[1268]: E1212 20:19:49.543234    1268 projected.go:198] Error preparing data for projected volume kube-api-access-ldfj7 for pod kube-system/kube-proxy-4rrmn: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 20:19:49 multinode-562818 kubelet[1268]: E1212 20:19:49.543290    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bcd718f-0c7c-461a-895e-44a0c1d566fd-kube-api-access-ldfj7 podName:2bcd718f-0c7c-461a-895e-44a0c1d566fd nodeName:}" failed. No retries permitted until 2023-12-12 20:19:50.04327534 +0000 UTC m=+14.493421133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ldfj7" (UniqueName: "kubernetes.io/projected/2bcd718f-0c7c-461a-895e-44a0c1d566fd-kube-api-access-ldfj7") pod "kube-proxy-4rrmn" (UID: "2bcd718f-0c7c-461a-895e-44a0c1d566fd") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 20:19:53 multinode-562818 kubelet[1268]: I1212 20:19:53.916025    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4rrmn" podStartSLOduration=5.915978777 podCreationTimestamp="2023-12-12 20:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:19:50.90553239 +0000 UTC m=+15.355678186" watchObservedRunningTime="2023-12-12 20:19:53.915978777 +0000 UTC m=+18.366124552"
	Dec 12 20:19:54 multinode-562818 kubelet[1268]: I1212 20:19:54.857888    1268 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 20:19:54 multinode-562818 kubelet[1268]: I1212 20:19:54.894793    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-24p9c" podStartSLOduration=6.894734165 podCreationTimestamp="2023-12-12 20:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:19:53.918724812 +0000 UTC m=+18.368870607" watchObservedRunningTime="2023-12-12 20:19:54.894734165 +0000 UTC m=+19.344879955"
	Dec 12 20:19:54 multinode-562818 kubelet[1268]: I1212 20:19:54.895166    1268 topology_manager.go:215] "Topology Admit Handler" podUID="e77852fc-eb8a-4027-98e1-070b4ca43f54" podNamespace="kube-system" podName="coredns-5dd5756b68-689lp"
	Dec 12 20:19:54 multinode-562818 kubelet[1268]: I1212 20:19:54.900812    1268 topology_manager.go:215] "Topology Admit Handler" podUID="9efe55ce-d87d-4074-9983-d880908d6d3d" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 20:19:55 multinode-562818 kubelet[1268]: I1212 20:19:55.067237    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e77852fc-eb8a-4027-98e1-070b4ca43f54-config-volume\") pod \"coredns-5dd5756b68-689lp\" (UID: \"e77852fc-eb8a-4027-98e1-070b4ca43f54\") " pod="kube-system/coredns-5dd5756b68-689lp"
	Dec 12 20:19:55 multinode-562818 kubelet[1268]: I1212 20:19:55.067336    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jj6\" (UniqueName: \"kubernetes.io/projected/e77852fc-eb8a-4027-98e1-070b4ca43f54-kube-api-access-79jj6\") pod \"coredns-5dd5756b68-689lp\" (UID: \"e77852fc-eb8a-4027-98e1-070b4ca43f54\") " pod="kube-system/coredns-5dd5756b68-689lp"
	Dec 12 20:19:55 multinode-562818 kubelet[1268]: I1212 20:19:55.067362    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9efe55ce-d87d-4074-9983-d880908d6d3d-tmp\") pod \"storage-provisioner\" (UID: \"9efe55ce-d87d-4074-9983-d880908d6d3d\") " pod="kube-system/storage-provisioner"
	Dec 12 20:19:55 multinode-562818 kubelet[1268]: I1212 20:19:55.067453    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmb66\" (UniqueName: \"kubernetes.io/projected/9efe55ce-d87d-4074-9983-d880908d6d3d-kube-api-access-qmb66\") pod \"storage-provisioner\" (UID: \"9efe55ce-d87d-4074-9983-d880908d6d3d\") " pod="kube-system/storage-provisioner"
	Dec 12 20:19:56 multinode-562818 kubelet[1268]: I1212 20:19:56.951259    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.951220648 podCreationTimestamp="2023-12-12 20:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:19:56.936798201 +0000 UTC m=+21.386943997" watchObservedRunningTime="2023-12-12 20:19:56.951220648 +0000 UTC m=+21.401366444"
	Dec 12 20:20:35 multinode-562818 kubelet[1268]: E1212 20:20:35.873025    1268 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 20:20:35 multinode-562818 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 20:20:35 multinode-562818 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 20:20:35 multinode-562818 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 20:20:43 multinode-562818 kubelet[1268]: I1212 20:20:43.520517    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-689lp" podStartSLOduration=55.520355875 podCreationTimestamp="2023-12-12 20:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:19:56.951596196 +0000 UTC m=+21.401741991" watchObservedRunningTime="2023-12-12 20:20:43.520355875 +0000 UTC m=+67.970501670"
	Dec 12 20:20:43 multinode-562818 kubelet[1268]: I1212 20:20:43.520985    1268 topology_manager.go:215] "Topology Admit Handler" podUID="59edc235-8efb-4eda-85e5-8ef3403bf5f3" podNamespace="default" podName="busybox-5bc68d56bd-9wvsx"
	Dec 12 20:20:43 multinode-562818 kubelet[1268]: I1212 20:20:43.690324    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxh85\" (UniqueName: \"kubernetes.io/projected/59edc235-8efb-4eda-85e5-8ef3403bf5f3-kube-api-access-jxh85\") pod \"busybox-5bc68d56bd-9wvsx\" (UID: \"59edc235-8efb-4eda-85e5-8ef3403bf5f3\") " pod="default/busybox-5bc68d56bd-9wvsx"
	Dec 12 20:20:46 multinode-562818 kubelet[1268]: I1212 20:20:46.127576    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-9wvsx" podStartSLOduration=2.186035648 podCreationTimestamp="2023-12-12 20:20:43 +0000 UTC" firstStartedPulling="2023-12-12 20:20:44.409441703 +0000 UTC m=+68.859587491" lastFinishedPulling="2023-12-12 20:20:45.350868095 +0000 UTC m=+69.801013875" observedRunningTime="2023-12-12 20:20:46.126217273 +0000 UTC m=+70.576363052" watchObservedRunningTime="2023-12-12 20:20:46.127462032 +0000 UTC m=+70.577607829"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-562818 -n multinode-562818
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-562818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.33s)
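The kubelet journal above also shows a recurring failure to create the KUBE-KUBELET-CANARY chain because the guest kernel has no ip6tables `nat' table. Whether or not this is related to the ping failure, it can be checked from the host with a sketch like the following (assuming shell access through `minikube ssh`, which the suite already uses elsewhere; ip6table_nat is the kernel module that normally provides that table):

  out/minikube-linux-amd64 -p multinode-562818 ssh "sudo modprobe ip6table_nat"
  out/minikube-linux-amd64 -p multinode-562818 ssh "sudo ip6tables -t nat -L -n"

If the modprobe fails, the module is likely just not shipped in the guest image and the canary message is expected noise.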

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (686.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-562818
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-562818
E1212 20:22:16.567652   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:23:56.433738   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-562818: exit status 82 (2m1.328014367s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-562818"  ...
	* Stopping node "multinode-562818"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-562818" : exit status 82
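The stop itself exited with status 82, matching the GUEST_STOP_TIMEOUT in the stderr box above: after roughly two minutes the VM had still not left the "Running" state. Following the hint in that box, a minimal sketch for gathering more detail on a retry (both flags are standard minikube options):

  out/minikube-linux-amd64 stop -p multinode-562818 --alsologtostderr
  out/minikube-linux-amd64 -p multinode-562818 logs --file=logs.txt

The driver-side file referenced above (/tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log) may also be worth attaching, as the error box suggests.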
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-562818 --wait=true -v=8 --alsologtostderr
E1212 20:24:39.384949   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:26:02.519624   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:26:48.881074   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:28:56.434204   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:29:39.385175   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:30:19.479410   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:31:48.881458   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:33:11.928448   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-562818 --wait=true -v=8 --alsologtostderr: (9m21.753117675s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-562818
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-562818 -n multinode-562818
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-562818 logs -n 25: (1.595494873s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m02:/home/docker/cp-test.txt                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1880154385/001/cp-test_multinode-562818-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m02:/home/docker/cp-test.txt                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818:/home/docker/cp-test_multinode-562818-m02_multinode-562818.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n multinode-562818 sudo cat                                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /home/docker/cp-test_multinode-562818-m02_multinode-562818.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m02:/home/docker/cp-test.txt                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03:/home/docker/cp-test_multinode-562818-m02_multinode-562818-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n multinode-562818-m03 sudo cat                                   | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /home/docker/cp-test_multinode-562818-m02_multinode-562818-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp testdata/cp-test.txt                                                | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1880154385/001/cp-test_multinode-562818-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818:/home/docker/cp-test_multinode-562818-m03_multinode-562818.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n multinode-562818 sudo cat                                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /home/docker/cp-test_multinode-562818-m03_multinode-562818.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt                       | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m02:/home/docker/cp-test_multinode-562818-m03_multinode-562818-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n multinode-562818-m02 sudo cat                                   | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /home/docker/cp-test_multinode-562818-m03_multinode-562818-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-562818 node stop m03                                                          | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	| node    | multinode-562818 node start                                                             | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:22 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-562818                                                                | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:22 UTC |                     |
	| stop    | -p multinode-562818                                                                     | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:22 UTC |                     |
	| start   | -p multinode-562818                                                                     | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:24 UTC | 12 Dec 23 20:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-562818                                                                | multinode-562818 | jenkins | v1.32.0 | 12 Dec 23 20:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 20:24:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:24:17.414703   33042 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:24:17.414982   33042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:24:17.414991   33042 out.go:309] Setting ErrFile to fd 2...
	I1212 20:24:17.414996   33042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:24:17.415182   33042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:24:17.415756   33042 out.go:303] Setting JSON to false
	I1212 20:24:17.416706   33042 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4011,"bootTime":1702408646,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:24:17.416767   33042 start.go:138] virtualization: kvm guest
	I1212 20:24:17.419248   33042 out.go:177] * [multinode-562818] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:24:17.421222   33042 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:24:17.421293   33042 notify.go:220] Checking for updates...
	I1212 20:24:17.422778   33042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:24:17.424502   33042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:24:17.426213   33042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:24:17.427714   33042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:24:17.429161   33042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:24:17.431107   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:24:17.431211   33042 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:24:17.431704   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:24:17.431750   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:24:17.446105   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I1212 20:24:17.446456   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:24:17.446952   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:24:17.446974   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:24:17.447336   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:24:17.447518   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:24:17.483149   33042 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 20:24:17.484727   33042 start.go:298] selected driver: kvm2
	I1212 20:24:17.484739   33042 start.go:902] validating driver "kvm2" against &{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:24:17.484893   33042 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:24:17.485203   33042 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:24:17.485287   33042 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 20:24:17.499780   33042 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 20:24:17.500455   33042 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:24:17.500530   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:24:17.500545   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:24:17.500559   33042 start_flags.go:323] config:
	{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:24:17.500807   33042 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:24:17.502795   33042 out.go:177] * Starting control plane node multinode-562818 in cluster multinode-562818
	I1212 20:24:17.504235   33042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:24:17.504280   33042 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 20:24:17.504286   33042 cache.go:56] Caching tarball of preloaded images
	I1212 20:24:17.504359   33042 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:24:17.504370   33042 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 20:24:17.504490   33042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:24:17.504675   33042 start.go:365] acquiring machines lock for multinode-562818: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:24:17.504719   33042 start.go:369] acquired machines lock for "multinode-562818" in 25.171µs
	I1212 20:24:17.504735   33042 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:24:17.504742   33042 fix.go:54] fixHost starting: 
	I1212 20:24:17.504979   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:24:17.505010   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:24:17.520696   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I1212 20:24:17.521168   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:24:17.521645   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:24:17.521667   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:24:17.521973   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:24:17.522198   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:24:17.522360   33042 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:24:17.523856   33042 fix.go:102] recreateIfNeeded on multinode-562818: state=Running err=<nil>
	W1212 20:24:17.523874   33042 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:24:17.526028   33042 out.go:177] * Updating the running kvm2 "multinode-562818" VM ...
	I1212 20:24:17.527581   33042 machine.go:88] provisioning docker machine ...
	I1212 20:24:17.527606   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:24:17.527818   33042 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:24:17.527979   33042 buildroot.go:166] provisioning hostname "multinode-562818"
	I1212 20:24:17.528001   33042 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:24:17.528129   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:24:17.530406   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:24:17.530757   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:24:17.530792   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:24:17.530970   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:24:17.531124   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:24:17.531299   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:24:17.531403   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:24:17.531557   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:24:17.531899   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:24:17.531915   33042 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-562818 && echo "multinode-562818" | sudo tee /etc/hostname
	I1212 20:24:35.939505   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:24:42.019537   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:24:45.091499   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:24:51.171570   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:24:54.243554   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:00.323553   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:03.395534   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:09.475555   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:12.547577   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:18.627593   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:21.699593   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:27.779527   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:30.851482   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:36.931504   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:40.003557   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:46.083531   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:49.155474   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:55.235477   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:25:58.307465   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:04.387529   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:07.459573   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:13.539490   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:16.611522   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:22.691632   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:25.763470   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:31.843550   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:34.915533   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:40.995506   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:44.067559   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:50.147513   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:53.219560   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:26:59.299539   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:02.371521   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:08.451503   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:11.523498   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:17.603547   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:20.675530   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:26.755553   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:29.827507   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:35.907526   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:38.979468   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:45.059537   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:48.131573   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:54.215503   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:27:57.283501   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:03.363506   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:06.435497   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:12.515505   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:15.587502   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:21.667522   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:24.739563   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:30.819539   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:33.891551   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:39.971512   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:43.043469   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:49.123529   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:52.195508   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:28:58.275550   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:29:01.347542   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:29:07.427514   33042 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.77:22: connect: no route to host
	I1212 20:29:10.429618   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:29:10.429672   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:10.431758   33042 machine.go:91] provisioned docker machine in 4m52.904151315s
	I1212 20:29:10.431806   33042 fix.go:56] fixHost completed within 4m52.927064287s
	I1212 20:29:10.431811   33042 start.go:83] releasing machines lock for "multinode-562818", held for 4m52.927084091s
	W1212 20:29:10.431825   33042 start.go:694] error starting host: provision: host is not running
	W1212 20:29:10.431918   33042 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 20:29:10.431929   33042 start.go:709] Will try again in 5 seconds ...
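The long run of "no route to host" errors above means the provisioner could not reach 192.168.39.77:22 for roughly four and a half minutes, even though the driver had initially reported the machine as Running; the retry that follows finds the domain Stopped and restarts it. A small cross-check from the host, assuming the libvirt system connection the kvm2 driver is configured with (qemu:///system) and the domain and network names shown in this log:

  virsh -c qemu:///system domstate multinode-562818
  virsh -c qemu:///system domifaddr multinode-562818
  virsh -c qemu:///system net-dhcp-leases mk-multinode-562818

domstate shows whether the guest is actually running, and the lease listing should match the 192.168.39.77 / 52:54:00:25:49:23 pair the log is waiting for.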
	I1212 20:29:15.433896   33042 start.go:365] acquiring machines lock for multinode-562818: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:29:15.434047   33042 start.go:369] acquired machines lock for "multinode-562818" in 108.848µs
	I1212 20:29:15.434071   33042 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:29:15.434076   33042 fix.go:54] fixHost starting: 
	I1212 20:29:15.434426   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:29:15.434447   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:29:15.449315   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I1212 20:29:15.449728   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:29:15.450123   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:29:15.450146   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:29:15.450554   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:29:15.450763   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:15.450941   33042 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:29:15.452467   33042 fix.go:102] recreateIfNeeded on multinode-562818: state=Stopped err=<nil>
	I1212 20:29:15.452494   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	W1212 20:29:15.452668   33042 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:29:15.455787   33042 out.go:177] * Restarting existing kvm2 VM for "multinode-562818" ...
	I1212 20:29:15.457203   33042 main.go:141] libmachine: (multinode-562818) Calling .Start
	I1212 20:29:15.457368   33042 main.go:141] libmachine: (multinode-562818) Ensuring networks are active...
	I1212 20:29:15.458168   33042 main.go:141] libmachine: (multinode-562818) Ensuring network default is active
	I1212 20:29:15.458501   33042 main.go:141] libmachine: (multinode-562818) Ensuring network mk-multinode-562818 is active
	I1212 20:29:15.458832   33042 main.go:141] libmachine: (multinode-562818) Getting domain xml...
	I1212 20:29:15.459478   33042 main.go:141] libmachine: (multinode-562818) Creating domain...
	I1212 20:29:16.692605   33042 main.go:141] libmachine: (multinode-562818) Waiting to get IP...
	I1212 20:29:16.693485   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:16.693879   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:16.693957   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:16.693852   33812 retry.go:31] will retry after 260.975245ms: waiting for machine to come up
	I1212 20:29:16.956415   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:16.956919   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:16.956948   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:16.956890   33812 retry.go:31] will retry after 332.806636ms: waiting for machine to come up
	I1212 20:29:17.291543   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:17.291867   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:17.291896   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:17.291830   33812 retry.go:31] will retry after 479.212087ms: waiting for machine to come up
	I1212 20:29:17.772433   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:17.772902   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:17.772923   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:17.772856   33812 retry.go:31] will retry after 379.712058ms: waiting for machine to come up
	I1212 20:29:18.154599   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:18.155153   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:18.155179   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:18.155092   33812 retry.go:31] will retry after 498.95609ms: waiting for machine to come up
	I1212 20:29:18.655785   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:18.656204   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:18.656238   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:18.656144   33812 retry.go:31] will retry after 706.051492ms: waiting for machine to come up
	I1212 20:29:19.363896   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:19.364315   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:19.364347   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:19.364262   33812 retry.go:31] will retry after 1.097187006s: waiting for machine to come up
	I1212 20:29:20.462874   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:20.463412   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:20.463444   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:20.463365   33812 retry.go:31] will retry after 1.195776333s: waiting for machine to come up
	I1212 20:29:21.660652   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:21.660972   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:21.660996   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:21.660922   33812 retry.go:31] will retry after 1.727958845s: waiting for machine to come up
	I1212 20:29:23.390880   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:23.391320   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:23.391343   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:23.391288   33812 retry.go:31] will retry after 2.249857991s: waiting for machine to come up
	I1212 20:29:25.643522   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:25.644041   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:25.644067   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:25.643978   33812 retry.go:31] will retry after 1.82794059s: waiting for machine to come up
	I1212 20:29:27.473169   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:27.473663   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:27.473694   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:27.473601   33812 retry.go:31] will retry after 2.546861062s: waiting for machine to come up
	I1212 20:29:30.022492   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:30.022897   33042 main.go:141] libmachine: (multinode-562818) DBG | unable to find current IP address of domain multinode-562818 in network mk-multinode-562818
	I1212 20:29:30.022927   33042 main.go:141] libmachine: (multinode-562818) DBG | I1212 20:29:30.022853   33812 retry.go:31] will retry after 4.191037933s: waiting for machine to come up
	I1212 20:29:34.218432   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.218869   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has current primary IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.218898   33042 main.go:141] libmachine: (multinode-562818) Found IP for machine: 192.168.39.77
	I1212 20:29:34.218914   33042 main.go:141] libmachine: (multinode-562818) Reserving static IP address...
	I1212 20:29:34.219389   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "multinode-562818", mac: "52:54:00:25:49:23", ip: "192.168.39.77"} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.219417   33042 main.go:141] libmachine: (multinode-562818) Reserved static IP address: 192.168.39.77
	I1212 20:29:34.219439   33042 main.go:141] libmachine: (multinode-562818) DBG | skip adding static IP to network mk-multinode-562818 - found existing host DHCP lease matching {name: "multinode-562818", mac: "52:54:00:25:49:23", ip: "192.168.39.77"}
	I1212 20:29:34.219465   33042 main.go:141] libmachine: (multinode-562818) DBG | Getting to WaitForSSH function...
	I1212 20:29:34.219475   33042 main.go:141] libmachine: (multinode-562818) Waiting for SSH to be available...
	I1212 20:29:34.221433   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.221802   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.221830   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.221949   33042 main.go:141] libmachine: (multinode-562818) DBG | Using SSH client type: external
	I1212 20:29:34.221975   33042 main.go:141] libmachine: (multinode-562818) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa (-rw-------)
	I1212 20:29:34.222009   33042 main.go:141] libmachine: (multinode-562818) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 20:29:34.222024   33042 main.go:141] libmachine: (multinode-562818) DBG | About to run SSH command:
	I1212 20:29:34.222050   33042 main.go:141] libmachine: (multinode-562818) DBG | exit 0
	I1212 20:29:34.311079   33042 main.go:141] libmachine: (multinode-562818) DBG | SSH cmd err, output: <nil>: 
	I1212 20:29:34.311395   33042 main.go:141] libmachine: (multinode-562818) Calling .GetConfigRaw
	I1212 20:29:34.312136   33042 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:29:34.314598   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.315013   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.315044   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.315376   33042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:29:34.315672   33042 machine.go:88] provisioning docker machine ...
	I1212 20:29:34.315696   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:34.315906   33042 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:29:34.316091   33042 buildroot.go:166] provisioning hostname "multinode-562818"
	I1212 20:29:34.316108   33042 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:29:34.316234   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:34.318090   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.318443   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.318464   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.318619   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:34.318788   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.318942   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.319081   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:34.319291   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:29:34.319667   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:29:34.319681   33042 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-562818 && echo "multinode-562818" | sudo tee /etc/hostname
	I1212 20:29:34.447613   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-562818
	
	I1212 20:29:34.447643   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:34.450166   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.450538   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.450581   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.450711   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:34.450945   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.451096   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.451269   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:34.451403   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:29:34.451755   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:29:34.451773   33042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-562818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-562818/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-562818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:29:34.574807   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:29:34.574848   33042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:29:34.574886   33042 buildroot.go:174] setting up certificates
	I1212 20:29:34.574895   33042 provision.go:83] configureAuth start
	I1212 20:29:34.574904   33042 main.go:141] libmachine: (multinode-562818) Calling .GetMachineName
	I1212 20:29:34.575145   33042 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:29:34.577710   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.577998   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.578026   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.578150   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:34.580255   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.580575   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.580613   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.580721   33042 provision.go:138] copyHostCerts
	I1212 20:29:34.580750   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:29:34.580784   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:29:34.580803   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:29:34.580889   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:29:34.580988   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:29:34.581011   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:29:34.581021   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:29:34.581060   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:29:34.581116   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:29:34.581141   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:29:34.581150   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:29:34.581183   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:29:34.581240   33042 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.multinode-562818 san=[192.168.39.77 192.168.39.77 localhost 127.0.0.1 minikube multinode-562818]
	I1212 20:29:34.818986   33042 provision.go:172] copyRemoteCerts
	I1212 20:29:34.819055   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:29:34.819088   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:34.821819   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.822167   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.822194   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.822375   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:34.822554   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.822692   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:34.822807   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:29:34.914515   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:29:34.914581   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:29:34.939099   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:29:34.939181   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:29:34.963496   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:29:34.963561   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 20:29:34.987313   33042 provision.go:86] duration metric: configureAuth took 412.40763ms
	I1212 20:29:34.987339   33042 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:29:34.987605   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:29:34.987688   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:34.990219   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.990604   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:34.990638   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:34.990835   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:34.991081   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.991293   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:34.991439   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:34.991633   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:29:34.991952   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:29:34.991971   33042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:29:35.311657   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:29:35.311712   33042 machine.go:91] provisioned docker machine in 996.025048ms
	I1212 20:29:35.311724   33042 start.go:300] post-start starting for "multinode-562818" (driver="kvm2")
	I1212 20:29:35.311737   33042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:29:35.311758   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:35.312112   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:29:35.312145   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:35.314788   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.315183   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:35.315211   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.315310   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:35.315507   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:35.315689   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:35.315837   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:29:35.401943   33042 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:29:35.406237   33042 command_runner.go:130] > NAME=Buildroot
	I1212 20:29:35.406261   33042 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 20:29:35.406268   33042 command_runner.go:130] > ID=buildroot
	I1212 20:29:35.406276   33042 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 20:29:35.406285   33042 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 20:29:35.406332   33042 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:29:35.406357   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:29:35.406441   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:29:35.406540   33042 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:29:35.406553   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /etc/ssl/certs/164562.pem
	I1212 20:29:35.406674   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:29:35.415098   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:29:35.438784   33042 start.go:303] post-start completed in 127.033723ms
	I1212 20:29:35.438808   33042 fix.go:56] fixHost completed within 20.004731609s
	I1212 20:29:35.438826   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:35.441348   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.441617   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:35.441671   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.441807   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:35.442006   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:35.442192   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:35.442338   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:35.442478   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:29:35.442835   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1212 20:29:35.442848   33042 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:29:35.556393   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412975.504685665
	
	I1212 20:29:35.556421   33042 fix.go:206] guest clock: 1702412975.504685665
	I1212 20:29:35.556432   33042 fix.go:219] Guest: 2023-12-12 20:29:35.504685665 +0000 UTC Remote: 2023-12-12 20:29:35.438811305 +0000 UTC m=+318.075862819 (delta=65.87436ms)
	I1212 20:29:35.556482   33042 fix.go:190] guest clock delta is within tolerance: 65.87436ms
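	As a quick check of the delta reported by fix.go above, it is just the difference between the guest timestamp returned by `date` and the host-side timestamp logged at fix.go:219:

		guest : 1702412975.504685665 s
		host  : 1702412975.438811305 s
		delta : 1702412975.504685665 - 1702412975.438811305 = 0.065874360 s ≈ 65.87436ms

	This matches the logged value, so the provisioner moves straight on to releasing the machines lock.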
	I1212 20:29:35.556491   33042 start.go:83] releasing machines lock for "multinode-562818", held for 20.122431509s
	I1212 20:29:35.556511   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:35.556776   33042 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:29:35.560480   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.561005   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:35.561037   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.561220   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:35.561974   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:35.562175   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:29:35.562230   33042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:29:35.562283   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:35.562378   33042 ssh_runner.go:195] Run: cat /version.json
	I1212 20:29:35.562415   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:29:35.565128   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.565431   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.565556   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:35.565585   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.565748   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:35.565892   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:35.565912   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:35.565932   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:35.566113   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:29:35.566123   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:35.566268   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:29:35.566304   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:29:35.566398   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:29:35.566508   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:29:35.647981   33042 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 20:29:35.649015   33042 ssh_runner.go:195] Run: systemctl --version
	I1212 20:29:35.677043   33042 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:29:35.677095   33042 command_runner.go:130] > systemd 247 (247)
	I1212 20:29:35.677137   33042 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 20:29:35.677212   33042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:29:35.822699   33042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:29:35.829091   33042 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:29:35.829453   33042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:29:35.829523   33042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:29:35.844740   33042 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 20:29:35.845008   33042 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:29:35.845023   33042 start.go:475] detecting cgroup driver to use...
	I1212 20:29:35.845085   33042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:29:35.858594   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:29:35.870551   33042 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:29:35.870639   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:29:35.883001   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:29:35.895357   33042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:29:35.910098   33042 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 20:29:35.996979   33042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:29:36.116266   33042 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 20:29:36.116314   33042 docker.go:219] disabling docker service ...
	I1212 20:29:36.116374   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:29:36.130086   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:29:36.141233   33042 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 20:29:36.142036   33042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:29:36.263410   33042 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 20:29:36.263502   33042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:29:36.384145   33042 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 20:29:36.384171   33042 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 20:29:36.384229   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:29:36.397583   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:29:36.413909   33042 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:29:36.414263   33042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 20:29:36.414365   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:29:36.423307   33042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:29:36.423368   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:29:36.432219   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:29:36.441185   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:29:36.450076   33042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:29:36.459467   33042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:29:36.467162   33042 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:29:36.467230   33042 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:29:36.467297   33042 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 20:29:36.480613   33042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:29:36.489340   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:29:36.599608   33042 ssh_runner.go:195] Run: sudo systemctl restart crio
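	Taken together, the Run: lines from 20:29:36.414 through 20:29:36.599 amount to a short shell sequence that points CRI-O at the pause:3.9 image, switches it to the cgroupfs cgroup manager, and re-enables bridged/forwarded traffic before restarting the daemon. A consolidated sketch of those steps, with paths and values copied from the log above rather than taken from minikube's source:

		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		sudo rm -rf /etc/cni/net.mk
		sudo modprobe br_netfilter        # the sysctl probe above failed until this module was loaded
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		sudo systemctl daemon-reload && sudo systemctl restart crio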
	I1212 20:29:36.763773   33042 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:29:36.763869   33042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:29:36.771563   33042 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:29:36.771588   33042 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:29:36.771609   33042 command_runner.go:130] > Device: 16h/22d	Inode: 750         Links: 1
	I1212 20:29:36.771616   33042 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:29:36.771621   33042 command_runner.go:130] > Access: 2023-12-12 20:29:36.698313795 +0000
	I1212 20:29:36.771627   33042 command_runner.go:130] > Modify: 2023-12-12 20:29:36.698313795 +0000
	I1212 20:29:36.771635   33042 command_runner.go:130] > Change: 2023-12-12 20:29:36.698313795 +0000
	I1212 20:29:36.771639   33042 command_runner.go:130] >  Birth: -
	I1212 20:29:36.771908   33042 start.go:543] Will wait 60s for crictl version
	I1212 20:29:36.771969   33042 ssh_runner.go:195] Run: which crictl
	I1212 20:29:36.775355   33042 command_runner.go:130] > /usr/bin/crictl
	I1212 20:29:36.775617   33042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:29:36.811216   33042 command_runner.go:130] > Version:  0.1.0
	I1212 20:29:36.811250   33042 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:29:36.811255   33042 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 20:29:36.811261   33042 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:29:36.812908   33042 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 20:29:36.812977   33042 ssh_runner.go:195] Run: crio --version
	I1212 20:29:36.856618   33042 command_runner.go:130] > crio version 1.24.1
	I1212 20:29:36.856645   33042 command_runner.go:130] > Version:          1.24.1
	I1212 20:29:36.856655   33042 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:29:36.856662   33042 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:29:36.856671   33042 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:29:36.856679   33042 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:29:36.856685   33042 command_runner.go:130] > Compiler:         gc
	I1212 20:29:36.856692   33042 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:29:36.856700   33042 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:29:36.856712   33042 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:29:36.856728   33042 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:29:36.856743   33042 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:29:36.856835   33042 ssh_runner.go:195] Run: crio --version
	I1212 20:29:36.902415   33042 command_runner.go:130] > crio version 1.24.1
	I1212 20:29:36.902436   33042 command_runner.go:130] > Version:          1.24.1
	I1212 20:29:36.902447   33042 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:29:36.902453   33042 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:29:36.902467   33042 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:29:36.902474   33042 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:29:36.902480   33042 command_runner.go:130] > Compiler:         gc
	I1212 20:29:36.902486   33042 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:29:36.902494   33042 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:29:36.902505   33042 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:29:36.902513   33042 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:29:36.902521   33042 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:29:36.904918   33042 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 20:29:36.906289   33042 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:29:36.909140   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:36.909554   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:29:36.909590   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:29:36.909842   33042 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:29:36.914098   33042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:29:36.927326   33042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:29:36.927393   33042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:29:36.964475   33042 command_runner.go:130] > {
	I1212 20:29:36.964494   33042 command_runner.go:130] >   "images": [
	I1212 20:29:36.964498   33042 command_runner.go:130] >     {
	I1212 20:29:36.964507   33042 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 20:29:36.964511   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:36.964517   33042 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 20:29:36.964520   33042 command_runner.go:130] >       ],
	I1212 20:29:36.964529   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:36.964546   33042 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 20:29:36.964561   33042 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 20:29:36.964571   33042 command_runner.go:130] >       ],
	I1212 20:29:36.964581   33042 command_runner.go:130] >       "size": "750414",
	I1212 20:29:36.964589   33042 command_runner.go:130] >       "uid": {
	I1212 20:29:36.964597   33042 command_runner.go:130] >         "value": "65535"
	I1212 20:29:36.964607   33042 command_runner.go:130] >       },
	I1212 20:29:36.964614   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:36.964621   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:36.964627   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:36.964633   33042 command_runner.go:130] >     }
	I1212 20:29:36.964639   33042 command_runner.go:130] >   ]
	I1212 20:29:36.964645   33042 command_runner.go:130] > }
	I1212 20:29:36.964781   33042 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 20:29:36.964858   33042 ssh_runner.go:195] Run: which lz4
	I1212 20:29:36.968434   33042 command_runner.go:130] > /usr/bin/lz4
	I1212 20:29:36.968503   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 20:29:36.968603   33042 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 20:29:36.972761   33042 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:29:36.972811   33042 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:29:36.972835   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 20:29:39.148831   33042 crio.go:444] Took 2.180252 seconds to copy over tarball
	I1212 20:29:39.148896   33042 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:29:42.029233   33042 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.880305875s)
	I1212 20:29:42.029263   33042 crio.go:451] Took 2.880409 seconds to extract the tarball
	I1212 20:29:42.029275   33042 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 20:29:42.070142   33042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:29:42.119969   33042 command_runner.go:130] > {
	I1212 20:29:42.119993   33042 command_runner.go:130] >   "images": [
	I1212 20:29:42.120000   33042 command_runner.go:130] >     {
	I1212 20:29:42.120012   33042 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 20:29:42.120019   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120027   33042 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 20:29:42.120032   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120038   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120050   33042 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 20:29:42.120062   33042 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 20:29:42.120073   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120080   33042 command_runner.go:130] >       "size": "65258016",
	I1212 20:29:42.120097   33042 command_runner.go:130] >       "uid": null,
	I1212 20:29:42.120107   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120117   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120126   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120132   33042 command_runner.go:130] >     },
	I1212 20:29:42.120138   33042 command_runner.go:130] >     {
	I1212 20:29:42.120156   33042 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 20:29:42.120166   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120174   33042 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:29:42.120180   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120190   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120202   33042 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 20:29:42.120218   33042 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 20:29:42.120228   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120239   33042 command_runner.go:130] >       "size": "31470524",
	I1212 20:29:42.120249   33042 command_runner.go:130] >       "uid": null,
	I1212 20:29:42.120256   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120265   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120277   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120286   33042 command_runner.go:130] >     },
	I1212 20:29:42.120293   33042 command_runner.go:130] >     {
	I1212 20:29:42.120305   33042 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 20:29:42.120315   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120324   33042 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 20:29:42.120332   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120339   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120354   33042 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 20:29:42.120365   33042 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 20:29:42.120372   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120376   33042 command_runner.go:130] >       "size": "53621675",
	I1212 20:29:42.120383   33042 command_runner.go:130] >       "uid": null,
	I1212 20:29:42.120387   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120393   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120398   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120404   33042 command_runner.go:130] >     },
	I1212 20:29:42.120407   33042 command_runner.go:130] >     {
	I1212 20:29:42.120416   33042 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 20:29:42.120424   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120429   33042 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 20:29:42.120435   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120440   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120446   33042 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 20:29:42.120455   33042 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 20:29:42.120466   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120472   33042 command_runner.go:130] >       "size": "295456551",
	I1212 20:29:42.120477   33042 command_runner.go:130] >       "uid": {
	I1212 20:29:42.120484   33042 command_runner.go:130] >         "value": "0"
	I1212 20:29:42.120487   33042 command_runner.go:130] >       },
	I1212 20:29:42.120495   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120499   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120506   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120509   33042 command_runner.go:130] >     },
	I1212 20:29:42.120515   33042 command_runner.go:130] >     {
	I1212 20:29:42.120522   33042 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 20:29:42.120530   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120536   33042 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 20:29:42.120546   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120552   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120560   33042 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 20:29:42.120569   33042 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 20:29:42.120575   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120580   33042 command_runner.go:130] >       "size": "127226832",
	I1212 20:29:42.120586   33042 command_runner.go:130] >       "uid": {
	I1212 20:29:42.120591   33042 command_runner.go:130] >         "value": "0"
	I1212 20:29:42.120596   33042 command_runner.go:130] >       },
	I1212 20:29:42.120601   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120607   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120612   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120618   33042 command_runner.go:130] >     },
	I1212 20:29:42.120623   33042 command_runner.go:130] >     {
	I1212 20:29:42.120632   33042 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 20:29:42.120638   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120646   33042 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 20:29:42.120652   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120657   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120669   33042 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 20:29:42.120680   33042 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 20:29:42.120685   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120690   33042 command_runner.go:130] >       "size": "123261750",
	I1212 20:29:42.120696   33042 command_runner.go:130] >       "uid": {
	I1212 20:29:42.120701   33042 command_runner.go:130] >         "value": "0"
	I1212 20:29:42.120707   33042 command_runner.go:130] >       },
	I1212 20:29:42.120711   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120722   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120727   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120733   33042 command_runner.go:130] >     },
	I1212 20:29:42.120737   33042 command_runner.go:130] >     {
	I1212 20:29:42.120745   33042 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 20:29:42.120752   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120758   33042 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 20:29:42.120766   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120771   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120781   33042 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 20:29:42.120790   33042 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 20:29:42.120796   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120801   33042 command_runner.go:130] >       "size": "74749335",
	I1212 20:29:42.120807   33042 command_runner.go:130] >       "uid": null,
	I1212 20:29:42.120811   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120818   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120822   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120829   33042 command_runner.go:130] >     },
	I1212 20:29:42.120832   33042 command_runner.go:130] >     {
	I1212 20:29:42.120841   33042 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 20:29:42.120848   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120853   33042 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 20:29:42.120859   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120864   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120889   33042 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 20:29:42.120900   33042 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 20:29:42.120907   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120911   33042 command_runner.go:130] >       "size": "61551410",
	I1212 20:29:42.120917   33042 command_runner.go:130] >       "uid": {
	I1212 20:29:42.120922   33042 command_runner.go:130] >         "value": "0"
	I1212 20:29:42.120928   33042 command_runner.go:130] >       },
	I1212 20:29:42.120933   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.120939   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.120943   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.120949   33042 command_runner.go:130] >     },
	I1212 20:29:42.120953   33042 command_runner.go:130] >     {
	I1212 20:29:42.120962   33042 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 20:29:42.120968   33042 command_runner.go:130] >       "repoTags": [
	I1212 20:29:42.120973   33042 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 20:29:42.120979   33042 command_runner.go:130] >       ],
	I1212 20:29:42.120983   33042 command_runner.go:130] >       "repoDigests": [
	I1212 20:29:42.120993   33042 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 20:29:42.121002   33042 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 20:29:42.121011   33042 command_runner.go:130] >       ],
	I1212 20:29:42.121018   33042 command_runner.go:130] >       "size": "750414",
	I1212 20:29:42.121022   33042 command_runner.go:130] >       "uid": {
	I1212 20:29:42.121029   33042 command_runner.go:130] >         "value": "65535"
	I1212 20:29:42.121032   33042 command_runner.go:130] >       },
	I1212 20:29:42.121037   33042 command_runner.go:130] >       "username": "",
	I1212 20:29:42.121044   33042 command_runner.go:130] >       "spec": null,
	I1212 20:29:42.121050   33042 command_runner.go:130] >       "pinned": false
	I1212 20:29:42.121056   33042 command_runner.go:130] >     }
	I1212 20:29:42.121060   33042 command_runner.go:130] >   ]
	I1212 20:29:42.121066   33042 command_runner.go:130] > }
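	The conclusion on the next line, that all images are preloaded, comes from comparing the repoTags in this JSON against the expected image list for Kubernetes v1.28.4 on CRI-O. A quick way to reproduce that check by hand against the same output (the jq filter is only an illustration; minikube itself parses this JSON in Go, as the crio.go lines indicate):

		sudo crictl images --output json \
		  | jq -r '.images[].repoTags[]' \
		  | grep -x 'registry.k8s.io/kube-apiserver:v1.28.4'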
	I1212 20:29:42.121175   33042 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 20:29:42.121189   33042 cache_images.go:84] Images are preloaded, skipping loading
	I1212 20:29:42.121259   33042 ssh_runner.go:195] Run: crio config
	I1212 20:29:42.177309   33042 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:29:42.177333   33042 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:29:42.177340   33042 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:29:42.177344   33042 command_runner.go:130] > #
	I1212 20:29:42.177350   33042 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:29:42.177358   33042 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:29:42.177369   33042 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:29:42.177380   33042 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:29:42.177387   33042 command_runner.go:130] > # reload'.
	I1212 20:29:42.177397   33042 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:29:42.177409   33042 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:29:42.177419   33042 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:29:42.177428   33042 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:29:42.177434   33042 command_runner.go:130] > [crio]
	I1212 20:29:42.177444   33042 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:29:42.177454   33042 command_runner.go:130] > # containers images, in this directory.
	I1212 20:29:42.177470   33042 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 20:29:42.177486   33042 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:29:42.177498   33042 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 20:29:42.177508   33042 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:29:42.177514   33042 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:29:42.177521   33042 command_runner.go:130] > storage_driver = "overlay"
	I1212 20:29:42.177531   33042 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:29:42.177543   33042 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:29:42.177553   33042 command_runner.go:130] > storage_option = [
	I1212 20:29:42.177560   33042 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 20:29:42.177574   33042 command_runner.go:130] > ]
	I1212 20:29:42.177588   33042 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:29:42.177601   33042 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:29:42.177614   33042 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:29:42.177627   33042 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:29:42.177640   33042 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:29:42.177649   33042 command_runner.go:130] > # always happen on a node reboot
	I1212 20:29:42.177681   33042 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:29:42.177699   33042 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:29:42.177709   33042 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:29:42.177726   33042 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:29:42.177757   33042 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 20:29:42.177777   33042 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:29:42.177791   33042 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:29:42.177802   33042 command_runner.go:130] > # internal_wipe = true
	I1212 20:29:42.177811   33042 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:29:42.177825   33042 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:29:42.177834   33042 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:29:42.177846   33042 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:29:42.177859   33042 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:29:42.177869   33042 command_runner.go:130] > [crio.api]
	I1212 20:29:42.177880   33042 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:29:42.177889   33042 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:29:42.177898   33042 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:29:42.177908   33042 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:29:42.177919   33042 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:29:42.177931   33042 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:29:42.177941   33042 command_runner.go:130] > # stream_port = "0"
	I1212 20:29:42.177950   33042 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:29:42.177960   33042 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:29:42.177974   33042 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:29:42.177984   33042 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:29:42.177997   33042 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:29:42.178008   33042 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 20:29:42.178018   33042 command_runner.go:130] > # minutes.
	I1212 20:29:42.178025   33042 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:29:42.178038   33042 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:29:42.178052   33042 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 20:29:42.178061   33042 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:29:42.178076   33042 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:29:42.178089   33042 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:29:42.178097   33042 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 20:29:42.178101   33042 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:29:42.178108   33042 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:29:42.178115   33042 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 20:29:42.178122   33042 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:29:42.178128   33042 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 20:29:42.178147   33042 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:29:42.178155   33042 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:29:42.178159   33042 command_runner.go:130] > [crio.runtime]
	I1212 20:29:42.178167   33042 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:29:42.178173   33042 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:29:42.178182   33042 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:29:42.178192   33042 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:29:42.178225   33042 command_runner.go:130] > # default_ulimits = [
	I1212 20:29:42.178239   33042 command_runner.go:130] > # ]
	I1212 20:29:42.178252   33042 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:29:42.178266   33042 command_runner.go:130] > # no_pivot = false
	I1212 20:29:42.178278   33042 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:29:42.178291   33042 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:29:42.178303   33042 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:29:42.178313   33042 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:29:42.178320   33042 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:29:42.178334   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:29:42.178344   33042 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 20:29:42.178355   33042 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:29:42.178371   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:29:42.178382   33042 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:29:42.178394   33042 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:29:42.178405   33042 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:29:42.178420   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:29:42.178431   33042 command_runner.go:130] > conmon_env = [
	I1212 20:29:42.178444   33042 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 20:29:42.178453   33042 command_runner.go:130] > ]
	I1212 20:29:42.178466   33042 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:29:42.178482   33042 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:29:42.178494   33042 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:29:42.178502   33042 command_runner.go:130] > # default_env = [
	I1212 20:29:42.178534   33042 command_runner.go:130] > # ]
	I1212 20:29:42.178548   33042 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:29:42.178555   33042 command_runner.go:130] > # selinux = false
	I1212 20:29:42.178569   33042 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:29:42.178582   33042 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 20:29:42.178595   33042 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 20:29:42.178605   33042 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:29:42.178614   33042 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 20:29:42.178627   33042 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 20:29:42.178638   33042 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 20:29:42.178646   33042 command_runner.go:130] > # which might increase security.
	I1212 20:29:42.178657   33042 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 20:29:42.178673   33042 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:29:42.178688   33042 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:29:42.178701   33042 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:29:42.178718   33042 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:29:42.178729   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:29:42.178736   33042 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:29:42.178749   33042 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:29:42.178760   33042 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:29:42.178771   33042 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:29:42.178781   33042 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:29:42.178791   33042 command_runner.go:130] > # irqbalance daemon.
	I1212 20:29:42.178800   33042 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:29:42.178814   33042 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:29:42.178825   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:29:42.178836   33042 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:29:42.178845   33042 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:29:42.178856   33042 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:29:42.178868   33042 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:29:42.178896   33042 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:29:42.178908   33042 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:29:42.178917   33042 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:29:42.178936   33042 command_runner.go:130] > # will be added.
	I1212 20:29:42.178948   33042 command_runner.go:130] > # default_capabilities = [
	I1212 20:29:42.178954   33042 command_runner.go:130] > # 	"CHOWN",
	I1212 20:29:42.178965   33042 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:29:42.178971   33042 command_runner.go:130] > # 	"FSETID",
	I1212 20:29:42.178983   33042 command_runner.go:130] > # 	"FOWNER",
	I1212 20:29:42.178991   33042 command_runner.go:130] > # 	"SETGID",
	I1212 20:29:42.179001   33042 command_runner.go:130] > # 	"SETUID",
	I1212 20:29:42.179007   33042 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:29:42.179016   33042 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:29:42.179020   33042 command_runner.go:130] > # 	"KILL",
	I1212 20:29:42.179028   33042 command_runner.go:130] > # ]
	I1212 20:29:42.179038   33042 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:29:42.179051   33042 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:29:42.179059   33042 command_runner.go:130] > # default_sysctls = [
	I1212 20:29:42.179071   33042 command_runner.go:130] > # ]
	I1212 20:29:42.179082   33042 command_runner.go:130] > # List of devices on the host that a
	I1212 20:29:42.179095   33042 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:29:42.179107   33042 command_runner.go:130] > # allowed_devices = [
	I1212 20:29:42.179116   33042 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:29:42.179121   33042 command_runner.go:130] > # ]
	I1212 20:29:42.179126   33042 command_runner.go:130] > # List of additional devices, specified as
	I1212 20:29:42.179142   33042 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:29:42.179155   33042 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:29:42.179193   33042 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:29:42.179204   33042 command_runner.go:130] > # additional_devices = [
	I1212 20:29:42.179210   33042 command_runner.go:130] > # ]
	I1212 20:29:42.179222   33042 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:29:42.179232   33042 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:29:42.179264   33042 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:29:42.179271   33042 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:29:42.179281   33042 command_runner.go:130] > # ]
	I1212 20:29:42.179291   33042 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:29:42.179304   33042 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:29:42.179314   33042 command_runner.go:130] > # Defaults to false.
	I1212 20:29:42.179359   33042 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:29:42.179379   33042 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:29:42.179392   33042 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:29:42.179403   33042 command_runner.go:130] > # hooks_dir = [
	I1212 20:29:42.179411   33042 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:29:42.179420   33042 command_runner.go:130] > # ]
	I1212 20:29:42.179431   33042 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:29:42.179445   33042 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:29:42.179455   33042 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:29:42.179463   33042 command_runner.go:130] > #
	I1212 20:29:42.179474   33042 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:29:42.179488   33042 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:29:42.179500   33042 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:29:42.179508   33042 command_runner.go:130] > #
	I1212 20:29:42.179516   33042 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:29:42.179528   33042 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:29:42.179542   33042 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:29:42.179553   33042 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:29:42.179561   33042 command_runner.go:130] > #
	I1212 20:29:42.179572   33042 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:29:42.179584   33042 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:29:42.179596   33042 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:29:42.179605   33042 command_runner.go:130] > pids_limit = 1024
	I1212 20:29:42.179614   33042 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 20:29:42.179628   33042 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:29:42.179641   33042 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:29:42.179657   33042 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:29:42.179667   33042 command_runner.go:130] > # log_size_max = -1
	I1212 20:29:42.179679   33042 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:29:42.179690   33042 command_runner.go:130] > # log_to_journald = false
	I1212 20:29:42.179700   33042 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:29:42.179711   33042 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:29:42.179724   33042 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:29:42.179734   33042 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:29:42.179746   33042 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:29:42.179754   33042 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:29:42.179767   33042 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:29:42.179776   33042 command_runner.go:130] > # read_only = false
	I1212 20:29:42.179789   33042 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:29:42.179804   33042 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:29:42.179812   33042 command_runner.go:130] > # live configuration reload.
	I1212 20:29:42.179822   33042 command_runner.go:130] > # log_level = "info"
	I1212 20:29:42.179831   33042 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:29:42.179844   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:29:42.179853   33042 command_runner.go:130] > # log_filter = ""
	I1212 20:29:42.179864   33042 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:29:42.179873   33042 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:29:42.179878   33042 command_runner.go:130] > # separated by comma.
	I1212 20:29:42.179888   33042 command_runner.go:130] > # uid_mappings = ""
	I1212 20:29:42.179902   33042 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:29:42.179912   33042 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:29:42.179923   33042 command_runner.go:130] > # separated by comma.
	I1212 20:29:42.179930   33042 command_runner.go:130] > # gid_mappings = ""
	I1212 20:29:42.179942   33042 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:29:42.179976   33042 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:29:42.179993   33042 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:29:42.180005   33042 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:29:42.180017   33042 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:29:42.180031   33042 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:29:42.180045   33042 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:29:42.180057   33042 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:29:42.180071   33042 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:29:42.180084   33042 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:29:42.180094   33042 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:29:42.180104   33042 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:29:42.180114   33042 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:29:42.180127   33042 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:29:42.180136   33042 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:29:42.180148   33042 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:29:42.180157   33042 command_runner.go:130] > drop_infra_ctr = false
	I1212 20:29:42.180169   33042 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:29:42.180182   33042 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:29:42.180195   33042 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:29:42.180209   33042 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:29:42.180225   33042 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:29:42.180239   33042 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:29:42.180247   33042 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:29:42.180261   33042 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:29:42.180272   33042 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 20:29:42.180282   33042 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:29:42.180293   33042 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 20:29:42.180302   33042 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 20:29:42.180307   33042 command_runner.go:130] > # default_runtime = "runc"
	I1212 20:29:42.180314   33042 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:29:42.180322   33042 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:29:42.180350   33042 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:29:42.180361   33042 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:29:42.180369   33042 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:29:42.180376   33042 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:29:42.180380   33042 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:29:42.180386   33042 command_runner.go:130] > # ]
	I1212 20:29:42.180394   33042 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:29:42.180403   33042 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:29:42.180411   33042 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 20:29:42.180418   33042 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 20:29:42.180423   33042 command_runner.go:130] > #
	I1212 20:29:42.180428   33042 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 20:29:42.180433   33042 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 20:29:42.180440   33042 command_runner.go:130] > #  runtime_type = "oci"
	I1212 20:29:42.180445   33042 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 20:29:42.180450   33042 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 20:29:42.180454   33042 command_runner.go:130] > #  allowed_annotations = []
	I1212 20:29:42.180460   33042 command_runner.go:130] > # Where:
	I1212 20:29:42.180466   33042 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 20:29:42.180474   33042 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 20:29:42.180480   33042 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:29:42.180489   33042 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:29:42.180496   33042 command_runner.go:130] > #   in $PATH.
	I1212 20:29:42.180504   33042 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 20:29:42.180511   33042 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:29:42.180519   33042 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 20:29:42.180523   33042 command_runner.go:130] > #   state.
	I1212 20:29:42.180531   33042 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:29:42.180537   33042 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 20:29:42.180546   33042 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:29:42.180551   33042 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:29:42.180564   33042 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:29:42.180578   33042 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:29:42.180586   33042 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:29:42.180593   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:29:42.180602   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:29:42.180627   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:29:42.180633   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:29:42.180640   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:29:42.180647   33042 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:29:42.180653   33042 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:29:42.180668   33042 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 20:29:42.180683   33042 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:29:42.180695   33042 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:29:42.180702   33042 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 20:29:42.180710   33042 command_runner.go:130] > runtime_type = "oci"
	I1212 20:29:42.180714   33042 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:29:42.180721   33042 command_runner.go:130] > runtime_config_path = ""
	I1212 20:29:42.180725   33042 command_runner.go:130] > monitor_path = ""
	I1212 20:29:42.180731   33042 command_runner.go:130] > monitor_cgroup = ""
	I1212 20:29:42.180735   33042 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:29:42.180743   33042 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 20:29:42.180747   33042 command_runner.go:130] > # running containers
	I1212 20:29:42.180754   33042 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 20:29:42.180764   33042 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 20:29:42.180821   33042 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 20:29:42.180830   33042 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 20:29:42.180835   33042 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 20:29:42.180842   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 20:29:42.180847   33042 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 20:29:42.180858   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 20:29:42.180869   33042 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 20:29:42.180880   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 20:29:42.180891   33042 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:29:42.180903   33042 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:29:42.180916   33042 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:29:42.180931   33042 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:29:42.180942   33042 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 20:29:42.180950   33042 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:29:42.180962   33042 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:29:42.180979   33042 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:29:42.180991   33042 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:29:42.181007   33042 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:29:42.181016   33042 command_runner.go:130] > # Example:
	I1212 20:29:42.181027   33042 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:29:42.181037   33042 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:29:42.181048   33042 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:29:42.181058   33042 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:29:42.181067   33042 command_runner.go:130] > # cpuset = 0
	I1212 20:29:42.181077   33042 command_runner.go:130] > # cpushares = "0-1"
	I1212 20:29:42.181087   33042 command_runner.go:130] > # Where:
	I1212 20:29:42.181095   33042 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:29:42.181111   33042 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:29:42.181123   33042 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:29:42.181135   33042 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:29:42.181148   33042 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:29:42.181158   33042 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:29:42.181164   33042 command_runner.go:130] > # 
	I1212 20:29:42.181175   33042 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:29:42.181184   33042 command_runner.go:130] > #
	I1212 20:29:42.181195   33042 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:29:42.181208   33042 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 20:29:42.181222   33042 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 20:29:42.181240   33042 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 20:29:42.181252   33042 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 20:29:42.181258   33042 command_runner.go:130] > [crio.image]
	I1212 20:29:42.181271   33042 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:29:42.181283   33042 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:29:42.181296   33042 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:29:42.181311   33042 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:29:42.181321   33042 command_runner.go:130] > # global_auth_file = ""
	I1212 20:29:42.181333   33042 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:29:42.181345   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:29:42.181360   33042 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 20:29:42.181374   33042 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:29:42.181387   33042 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:29:42.181399   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:29:42.181410   33042 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:29:42.181442   33042 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:29:42.181454   33042 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 20:29:42.181464   33042 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 20:29:42.181478   33042 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:29:42.181489   33042 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:29:42.181500   33042 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:29:42.181516   33042 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:29:42.181530   33042 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:29:42.181543   33042 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:29:42.181554   33042 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:29:42.181562   33042 command_runner.go:130] > # signature_policy = ""
	I1212 20:29:42.181571   33042 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:29:42.181582   33042 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:29:42.181589   33042 command_runner.go:130] > # changing them here.
	I1212 20:29:42.181596   33042 command_runner.go:130] > # insecure_registries = [
	I1212 20:29:42.181601   33042 command_runner.go:130] > # ]
	I1212 20:29:42.181614   33042 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:29:42.181623   33042 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:29:42.181630   33042 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:29:42.181641   33042 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:29:42.181646   33042 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:29:42.181656   33042 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:29:42.181666   33042 command_runner.go:130] > # CNI plugins.
	I1212 20:29:42.181673   33042 command_runner.go:130] > [crio.network]
	I1212 20:29:42.181690   33042 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:29:42.181704   33042 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:29:42.181715   33042 command_runner.go:130] > # cni_default_network = ""
	I1212 20:29:42.181727   33042 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:29:42.181736   33042 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:29:42.181746   33042 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:29:42.181756   33042 command_runner.go:130] > # plugin_dirs = [
	I1212 20:29:42.181766   33042 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:29:42.181772   33042 command_runner.go:130] > # ]
	I1212 20:29:42.181785   33042 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:29:42.181795   33042 command_runner.go:130] > [crio.metrics]
	I1212 20:29:42.181806   33042 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:29:42.181816   33042 command_runner.go:130] > enable_metrics = true
	I1212 20:29:42.181826   33042 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:29:42.181835   33042 command_runner.go:130] > # By default all metrics are enabled.
	I1212 20:29:42.181846   33042 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:29:42.181859   33042 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:29:42.181875   33042 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:29:42.181888   33042 command_runner.go:130] > # metrics_collectors = [
	I1212 20:29:42.181898   33042 command_runner.go:130] > # 	"operations",
	I1212 20:29:42.181909   33042 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 20:29:42.181920   33042 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 20:29:42.181930   33042 command_runner.go:130] > # 	"operations_errors",
	I1212 20:29:42.181938   33042 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 20:29:42.181946   33042 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 20:29:42.181956   33042 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 20:29:42.181967   33042 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 20:29:42.181974   33042 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 20:29:42.181985   33042 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:29:42.181995   33042 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 20:29:42.182004   33042 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:29:42.182018   33042 command_runner.go:130] > # 	"containers_oom",
	I1212 20:29:42.182028   33042 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:29:42.182036   33042 command_runner.go:130] > # 	"operations_total",
	I1212 20:29:42.182042   33042 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:29:42.182053   33042 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:29:42.182067   33042 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:29:42.182075   33042 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:29:42.182085   33042 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:29:42.182092   33042 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:29:42.182102   33042 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:29:42.182113   33042 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:29:42.182124   33042 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:29:42.182132   33042 command_runner.go:130] > # ]
	I1212 20:29:42.182141   33042 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:29:42.182151   33042 command_runner.go:130] > # metrics_port = 9090
	I1212 20:29:42.182163   33042 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:29:42.182173   33042 command_runner.go:130] > # metrics_socket = ""
	I1212 20:29:42.182182   33042 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:29:42.182194   33042 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:29:42.182205   33042 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:29:42.182216   33042 command_runner.go:130] > # certificate on any modification event.
	I1212 20:29:42.182222   33042 command_runner.go:130] > # metrics_cert = ""
	I1212 20:29:42.182237   33042 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:29:42.182253   33042 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:29:42.182264   33042 command_runner.go:130] > # metrics_key = ""
	I1212 20:29:42.182276   33042 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:29:42.182286   33042 command_runner.go:130] > [crio.tracing]
	I1212 20:29:42.182298   33042 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:29:42.182308   33042 command_runner.go:130] > # enable_tracing = false
	I1212 20:29:42.182319   33042 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:29:42.182328   33042 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 20:29:42.182333   33042 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 20:29:42.182340   33042 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:29:42.182346   33042 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:29:42.182350   33042 command_runner.go:130] > [crio.stats]
	I1212 20:29:42.182356   33042 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:29:42.182364   33042 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:29:42.182368   33042 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:29:42.182395   33042 command_runner.go:130] ! time="2023-12-12 20:29:42.121373129Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 20:29:42.182410   33042 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
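	The dump above is the effective /etc/crio/crio.conf that minikube provisions on the node: cgroupfs as the cgroup manager, registry.k8s.io/pause:3.9 as the pause image, and a pids_limit of 1024. As a minimal sketch of how those values could be read back for verification (illustrative only, not minikube code; it assumes the file lives at /etc/crio/crio.conf and uses the third-party github.com/BurntSushi/toml decoder):

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Only the handful of fields checked here are modeled; the real file has many more.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
				PidsLimit     int    `toml:"pids_limit"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
		fmt.Println("pids_limit:", cfg.Crio.Runtime.PidsLimit)
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	}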
	I1212 20:29:42.182477   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:29:42.182488   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:29:42.182505   33042 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:29:42.182531   33042 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-562818 NodeName:multinode-562818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:29:42.182661   33042 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-562818"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
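	The kubeadm.yaml above is rendered from the kubeadm options logged at kubeadm.go:176. Below is a minimal sketch of the general technique, rendering a config document from a struct with text/template; the struct and template here are simplified stand-ins, not minikube's actual types or template:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down stand-in for the options minikube feeds its kubeadm template.
	type kubeadmOptions struct {
		APIServerPort     int
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
		ClusterName       string
	}

	const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		opts := kubeadmOptions{
			APIServerPort:     8443,
			KubernetesVersion: "v1.28.4",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			ClusterName:       "mk",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}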
	I1212 20:29:42.182732   33042 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-562818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 20:29:42.182785   33042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 20:29:42.191892   33042 command_runner.go:130] > kubeadm
	I1212 20:29:42.191909   33042 command_runner.go:130] > kubectl
	I1212 20:29:42.191913   33042 command_runner.go:130] > kubelet
	I1212 20:29:42.191949   33042 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 20:29:42.191991   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:29:42.200540   33042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1212 20:29:42.216618   33042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:29:42.232542   33042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1212 20:29:42.248913   33042 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I1212 20:29:42.252596   33042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
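	The grep/bash pair above keeps the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale entry for the name is stripped and the current IP is appended. A standard-library Go sketch of the same update (illustrative only, not minikube code; writing /etc/hosts requires root):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const hostname = "control-plane.minikube.internal"
		const ip = "192.168.39.77"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}

		// Keep every line except an existing mapping for the control-plane name.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}

		// Append the current mapping and write the file back.
		kept = append(kept, ip+"\t"+hostname)
		out := strings.Join(kept, "\n") + "\n"
		if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
			log.Fatal(err)
		}
	}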
	I1212 20:29:42.264665   33042 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818 for IP: 192.168.39.77
	I1212 20:29:42.264696   33042 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:29:42.264867   33042 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:29:42.264927   33042 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:29:42.265001   33042 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key
	I1212 20:29:42.265077   33042 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key.2f0f2646
	I1212 20:29:42.265135   33042 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key
	I1212 20:29:42.265149   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:29:42.265170   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:29:42.265187   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:29:42.265204   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:29:42.265224   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:29:42.265242   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:29:42.265260   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:29:42.265277   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:29:42.265343   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:29:42.265384   33042 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:29:42.265398   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:29:42.265455   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:29:42.265501   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:29:42.265532   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:29:42.265592   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:29:42.265642   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:29:42.265667   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem -> /usr/share/ca-certificates/16456.pem
	I1212 20:29:42.265681   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /usr/share/ca-certificates/164562.pem
	I1212 20:29:42.266245   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 20:29:42.290017   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:29:42.312987   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:29:42.335701   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:29:42.359695   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:29:42.383878   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:29:42.407682   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:29:42.430807   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:29:42.726154   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:29:42.749191   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:29:42.772009   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:29:42.794956   33042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:29:42.810977   33042 ssh_runner.go:195] Run: openssl version
	I1212 20:29:42.816728   33042 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 20:29:42.816807   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:29:42.827163   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:29:42.832045   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:29:42.832307   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:29:42.832366   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:29:42.838030   33042 command_runner.go:130] > b5213941
	I1212 20:29:42.838085   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 20:29:42.848093   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:29:42.857675   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:29:42.862102   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:29:42.862128   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:29:42.862164   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:29:42.867549   33042 command_runner.go:130] > 51391683
	I1212 20:29:42.867803   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:29:42.877570   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:29:42.887432   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:29:42.892104   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:29:42.892130   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:29:42.892180   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:29:42.897339   33042 command_runner.go:130] > 3ec20f2e
	I1212 20:29:42.897652   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
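The three link steps above follow OpenSSL's hashed-directory convention: each CA bundle is copied under /usr/share/ca-certificates, its subject hash is computed with openssl x509 -hash -noout, and a symlink named <hash>.0 is created in /etc/ssl/certs so the system OpenSSL lookup can find it. Below is a minimal Go sketch of that convention, not minikube's own code; the PEM path is taken from the log for illustration, and like the sudo'd commands above it needs root to write into /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the steps logged above: compute the OpenSSL subject
// hash of a CA PEM and link it as /etc/ssl/certs/<hash>.0 so the hashed
// certificate directory lookup resolves it.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, matching the "ln -fs" in the log
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}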
	I1212 20:29:42.906940   33042 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:29:42.911044   33042 command_runner.go:130] > ca.crt
	I1212 20:29:42.911063   33042 command_runner.go:130] > ca.key
	I1212 20:29:42.911071   33042 command_runner.go:130] > healthcheck-client.crt
	I1212 20:29:42.911077   33042 command_runner.go:130] > healthcheck-client.key
	I1212 20:29:42.911084   33042 command_runner.go:130] > peer.crt
	I1212 20:29:42.911089   33042 command_runner.go:130] > peer.key
	I1212 20:29:42.911094   33042 command_runner.go:130] > server.crt
	I1212 20:29:42.911100   33042 command_runner.go:130] > server.key
	I1212 20:29:42.911204   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:29:42.916770   33042 command_runner.go:130] > Certificate will not expire
	I1212 20:29:42.916923   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:29:42.922514   33042 command_runner.go:130] > Certificate will not expire
	I1212 20:29:42.922582   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:29:42.928389   33042 command_runner.go:130] > Certificate will not expire
	I1212 20:29:42.928479   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:29:42.933998   33042 command_runner.go:130] > Certificate will not expire
	I1212 20:29:42.934203   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:29:42.940247   33042 command_runner.go:130] > Certificate will not expire
	I1212 20:29:42.940311   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:29:42.946111   33042 command_runner.go:130] > Certificate will not expire
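Each control-plane certificate above is probed with openssl x509 -checkend 86400, i.e. "will this certificate still be valid 86400 seconds (24 hours) from now?"; the repeated "Certificate will not expire" lines mean every certificate passed. The same check can be done without shelling out, as in this sketch using Go's crypto/x509; the expiresWithin helper name and the chosen path are illustrative, not part of the test code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within d, the same question "openssl x509 -checkend 86400" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}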
	I1212 20:29:42.946544   33042 kubeadm.go:404] StartCluster: {Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:29:42.946678   33042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:29:42.946760   33042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:29:42.985431   33042 cri.go:89] found id: ""
	I1212 20:29:42.985503   33042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:29:42.995368   33042 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 20:29:42.995389   33042 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 20:29:42.995395   33042 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 20:29:42.995399   33042 command_runner.go:130] > member
	I1212 20:29:42.995518   33042 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 20:29:42.995551   33042 kubeadm.go:636] restartCluster start
	I1212 20:29:42.995614   33042 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:29:43.004673   33042 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:43.005191   33042 kubeconfig.go:92] found "multinode-562818" server: "https://192.168.39.77:8443"
	I1212 20:29:43.005633   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:29:43.005878   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:29:43.006466   33042 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 20:29:43.006640   33042 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:29:43.015635   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:43.015711   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:43.026190   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:43.026211   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:43.026250   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:43.036341   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:43.537063   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:43.537146   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:43.548562   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:44.037148   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:44.037263   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:44.049019   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:44.536505   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:44.536574   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:44.547684   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:45.037365   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:45.037459   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:45.048958   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:45.537094   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:45.537202   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:45.549386   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:46.036929   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:46.037025   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:46.048126   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:46.536673   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:46.536743   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:46.547675   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:47.037284   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:47.037386   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:47.048794   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:47.537217   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:47.537334   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:47.549100   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:48.037246   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:48.037322   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:48.048391   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:48.536923   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:48.537036   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:48.548361   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:49.036971   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:49.037064   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:49.048535   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:49.537144   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:49.537238   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:49.548302   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:50.036836   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:50.036918   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:50.047834   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:50.536953   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:50.537050   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:50.547884   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:51.036428   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:51.036505   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:51.047269   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:51.536780   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:51.536883   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:51.548239   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:52.036770   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:52.036840   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:52.048170   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:52.536642   33042 api_server.go:166] Checking apiserver status ...
	I1212 20:29:52.536725   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:29:52.549692   33042 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:29:53.015905   33042 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
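The block above is a poll loop: roughly every 500 ms the runner re-checks for a kube-apiserver process with pgrep, and once the surrounding context's deadline expires it concludes the control plane is not coming back on its own and needs to be reconfigured. A sketch of that poll-until-deadline pattern, assuming a hypothetical apiserverRunning probe and a 10-second deadline for brevity:

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log: pgrep exits non-zero when no
// matching kube-apiserver process exists, which we treat as "not running yet".
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls until the probe succeeds or the context deadline
// expires; the log's "apiserver error: context deadline exceeded" is the
// second outcome.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if apiserverRunning() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil && errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("needs reconfigure:", err)
	}
}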
	I1212 20:29:53.015937   33042 kubeadm.go:1135] stopping kube-system containers ...
	I1212 20:29:53.015948   33042 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:29:53.016011   33042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:29:53.064638   33042 cri.go:89] found id: ""
	I1212 20:29:53.064707   33042 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:29:53.081844   33042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:29:53.091007   33042 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 20:29:53.091033   33042 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 20:29:53.091040   33042 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 20:29:53.091048   33042 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:29:53.091073   33042 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:29:53.091121   33042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:29:53.100420   33042 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 20:29:53.100441   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:29:53.222711   33042 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:29:53.222733   33042 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 20:29:53.222743   33042 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 20:29:53.222753   33042 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:29:53.222763   33042 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 20:29:53.222772   33042 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:29:53.222780   33042 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 20:29:53.222793   33042 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 20:29:53.222808   33042 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:29:53.222824   33042 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:29:53.222838   33042 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:29:53.222849   33042 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 20:29:53.222879   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:29:54.144496   33042 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:29:54.144519   33042 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:29:54.144525   33042 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:29:54.144531   33042 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:29:54.144536   33042 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:29:54.144804   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:29:54.343706   33042 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:29:54.343736   33042 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:29:54.343742   33042 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 20:29:54.343769   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:29:54.432456   33042 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:29:54.432486   33042 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:29:54.432496   33042 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:29:54.432506   33042 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:29:54.432724   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:29:54.505576   33042 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
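Rather than running a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml, which is why every certificate above is reported as "Using existing". The following sketch runs that same phase sequence locally, assuming kubeadm is on PATH and the process has sufficient privileges; it is an illustration of the order shown in the log, not the test harness itself.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The same phase order the log runs over SSH; each phase reuses existing
	// certificates, kubeconfigs, and manifests where it can.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}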
	I1212 20:29:54.505611   33042 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:29:54.505668   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:54.526046   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:55.039930   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:55.539494   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:56.040290   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:56.539844   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:57.040137   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:29:57.060420   33042 command_runner.go:130] > 1066
	I1212 20:29:57.060500   33042 api_server.go:72] duration metric: took 2.554885122s to wait for apiserver process to appear ...
	I1212 20:29:57.060512   33042 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:29:57.060533   33042 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:30:00.454923   33042 api_server.go:279] https://192.168.39.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:30:00.454958   33042 api_server.go:103] status: https://192.168.39.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:30:00.454972   33042 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:30:00.531187   33042 api_server.go:279] https://192.168.39.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:30:00.531222   33042 api_server.go:103] status: https://192.168.39.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:30:01.032033   33042 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:30:01.054484   33042 api_server.go:279] https://192.168.39.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 20:30:01.054520   33042 api_server.go:103] status: https://192.168.39.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 20:30:01.532306   33042 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:30:01.538198   33042 api_server.go:279] https://192.168.39.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 20:30:01.538229   33042 api_server.go:103] status: https://192.168.39.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 20:30:02.031326   33042 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:30:02.037345   33042 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
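The healthz wait above treats 403 (anonymous access rejected while the RBAC bootstrap roles are still being created) and 500 (poststarthooks not yet finished) as "not ready yet" and keeps polling until /healthz finally returns 200 with the body "ok". A sketch of that loop; the URL comes from the log, and skipping TLS verification is a simplification for the sketch, whereas the real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver's /healthz endpoint until it returns HTTP
// 200; any error, 403, or 500 response is treated as "try again".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: verification is skipped to keep the example
		// self-contained; a production client would load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz returned "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.39.77:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}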
	I1212 20:30:02.037428   33042 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I1212 20:30:02.037437   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:02.037446   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:02.037457   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:02.046452   33042 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 20:30:02.046481   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:02.046491   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:02.046506   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:02.046515   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:02.046523   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:02.046533   33042 round_trippers.go:580]     Content-Length: 264
	I1212 20:30:02.046550   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:02 GMT
	I1212 20:30:02.046558   33042 round_trippers.go:580]     Audit-Id: 86a2445a-9b85-48f8-b803-d56d8f6805ed
	I1212 20:30:02.046596   33042 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 20:30:02.046688   33042 api_server.go:141] control plane version: v1.28.4
	I1212 20:30:02.046726   33042 api_server.go:131] duration metric: took 4.986202547s to wait for apiserver health ...
	I1212 20:30:02.046737   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:30:02.046743   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:30:02.048827   33042 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 20:30:02.050348   33042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:30:02.056586   33042 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 20:30:02.056610   33042 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 20:30:02.056617   33042 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 20:30:02.056623   33042 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:30:02.056629   33042 command_runner.go:130] > Access: 2023-12-12 20:29:28.351313795 +0000
	I1212 20:30:02.056633   33042 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 20:30:02.056641   33042 command_runner.go:130] > Change: 2023-12-12 20:29:26.512313795 +0000
	I1212 20:30:02.056645   33042 command_runner.go:130] >  Birth: -
	I1212 20:30:02.056678   33042 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 20:30:02.056692   33042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 20:30:02.080032   33042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:30:03.116981   33042 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:30:03.129237   33042 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:30:03.134163   33042 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 20:30:03.150112   33042 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 20:30:03.152581   33042 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.072508211s)
	I1212 20:30:03.152621   33042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:30:03.152717   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:30:03.152726   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.152734   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.152740   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.157322   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:30:03.157347   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.157354   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.157359   33042 round_trippers.go:580]     Audit-Id: b51431d2-31ad-4a3c-ac9d-e519f9052d4d
	I1212 20:30:03.157364   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.157369   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.157374   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.157383   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.159269   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83167 chars]
	I1212 20:30:03.163253   33042 system_pods.go:59] 12 kube-system pods found
	I1212 20:30:03.163289   33042 system_pods.go:61] "coredns-5dd5756b68-689lp" [e77852fc-eb8a-4027-98e1-070b4ca43f54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:30:03.163297   33042 system_pods.go:61] "etcd-multinode-562818" [5a874e4d-12ab-400c-8086-05073ffd1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:30:03.163304   33042 system_pods.go:61] "kindnet-24p9c" [e80eb9ab-2919-4be1-890d-34c26202f7fc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:03.163310   33042 system_pods.go:61] "kindnet-cmz7d" [b60f3109-0845-483d-81c9-1fe2bbffd622] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:03.163325   33042 system_pods.go:61] "kindnet-q7n6w" [ff09c341-d00a-4983-b169-5c19cf81b490] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:03.163335   33042 system_pods.go:61] "kube-apiserver-multinode-562818" [7d766a87-0f52-46ef-b1fb-392a197bca9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:30:03.163341   33042 system_pods.go:61] "kube-controller-manager-multinode-562818" [23b73a4b-e188-4b7c-a13d-1fd61862a4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:30:03.163349   33042 system_pods.go:61] "kube-proxy-4rrmn" [2bcd718f-0c7c-461a-895e-44a0c1d566fd] Running
	I1212 20:30:03.163353   33042 system_pods.go:61] "kube-proxy-sxw8h" [1f281e87-2597-4bd0-8ca4-cd7556c0a8e4] Running
	I1212 20:30:03.163360   33042 system_pods.go:61] "kube-proxy-xch7v" [c47d9b9f-ae3c-4404-a33a-d689c4b3e034] Running
	I1212 20:30:03.163365   33042 system_pods.go:61] "kube-scheduler-multinode-562818" [994614e5-3a18-422e-86ad-54c67237293d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:30:03.163373   33042 system_pods.go:61] "storage-provisioner" [9efe55ce-d87d-4074-9983-d880908d6d3d] Running
	I1212 20:30:03.163380   33042 system_pods.go:74] duration metric: took 10.753452ms to wait for pod list to return data ...
	I1212 20:30:03.163389   33042 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:30:03.163443   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I1212 20:30:03.163450   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.163457   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.163463   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.166688   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:03.166711   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.166719   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.166725   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.166730   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.166746   33042 round_trippers.go:580]     Audit-Id: c825de8e-c38b-4547-bb2b-08d03552b95e
	I1212 20:30:03.166755   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.166759   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.167673   33042 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16474 chars]
	I1212 20:30:03.168472   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:30:03.168495   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:30:03.168511   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:30:03.168519   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:30:03.168524   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:30:03.168529   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:30:03.168533   33042 node_conditions.go:105] duration metric: took 5.140705ms to run NodePressure ...
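The NodePressure check reads each node's capacity from GET /api/v1/nodes; here all three nodes report 17784752Ki of ephemeral storage and 2 CPUs. A sketch of the same read using client-go; the kubeconfig path is the one loaded earlier in this log and should be replaced with whatever your environment uses, and the accessors are standard client-go, not test code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17734-9188/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The same two per-node capacity figures the log prints.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}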
	I1212 20:30:03.168550   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:30:03.427582   33042 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 20:30:03.427610   33042 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 20:30:03.427633   33042 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 20:30:03.427733   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1212 20:30:03.427745   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.427756   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.427765   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.431090   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:03.431120   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.431130   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.431138   33042 round_trippers.go:580]     Audit-Id: cd2775eb-2f42-45de-9b9b-2d6edae7bf52
	I1212 20:30:03.431147   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.431153   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.431158   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.431163   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.432185   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"820"},"items":[{"metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"734","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I1212 20:30:03.433197   33042 kubeadm.go:787] kubelet initialised
	I1212 20:30:03.433217   33042 kubeadm.go:788] duration metric: took 5.576328ms waiting for restarted kubelet to initialise ...
	I1212 20:30:03.433225   33042 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:30:03.433286   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:30:03.433309   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.433319   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.433328   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.438158   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:30:03.438183   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.438193   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.438201   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.438207   33042 round_trippers.go:580]     Audit-Id: 2b27dbdf-41cf-4692-b9bf-4d158e8f1dce
	I1212 20:30:03.438215   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.438224   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.438232   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.439385   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"820"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83167 chars]
	I1212 20:30:03.442026   33042 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:03.442118   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:03.442129   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.442136   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.442145   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.444932   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.444951   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.444957   33042 round_trippers.go:580]     Audit-Id: 59d12c29-2470-4c9d-a5f5-41d3886d5dc7
	I1212 20:30:03.444963   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.444973   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.444978   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.444983   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.444988   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.445187   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:03.445835   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.445853   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.445864   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.445874   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.448463   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.448478   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.448485   33042 round_trippers.go:580]     Audit-Id: 51d935e4-9257-4bab-b75c-aaa68d9f863d
	I1212 20:30:03.448491   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.448496   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.448501   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.448506   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.448515   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.448654   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:03.449081   33042 pod_ready.go:97] node "multinode-562818" hosting pod "coredns-5dd5756b68-689lp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.449109   33042 pod_ready.go:81] duration metric: took 7.058449ms waiting for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	E1212 20:30:03.449121   33042 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-562818" hosting pod "coredns-5dd5756b68-689lp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.449135   33042 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:03.449207   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-562818
	I1212 20:30:03.449218   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.449226   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.449236   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.451505   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.451523   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.451536   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.451545   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.451554   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.451562   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.451573   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.451580   33042 round_trippers.go:580]     Audit-Id: 61429926-269f-45da-969e-143507d824ae
	I1212 20:30:03.451735   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"734","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 20:30:03.452191   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.452209   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.452219   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.452226   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.454917   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.454940   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.454950   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.454962   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.454971   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.454979   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.454988   33042 round_trippers.go:580]     Audit-Id: a1175ec0-921b-49ce-9b65-ff8f2e0690f5
	I1212 20:30:03.454997   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.455138   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:03.455481   33042 pod_ready.go:97] node "multinode-562818" hosting pod "etcd-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.455501   33042 pod_ready.go:81] duration metric: took 6.352129ms waiting for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	E1212 20:30:03.455510   33042 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-562818" hosting pod "etcd-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.455533   33042 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:03.455588   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:03.455597   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.455603   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.455609   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.457860   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.457882   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.457906   33042 round_trippers.go:580]     Audit-Id: fffd1994-0e35-4bd6-b692-ea291a12c031
	I1212 20:30:03.457915   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.457925   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.457934   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.457944   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.457951   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.458630   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:03.459008   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.459020   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.459027   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.459033   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.461254   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.461284   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.461295   33042 round_trippers.go:580]     Audit-Id: 8f80cd01-c956-4f54-b0e5-01839368887a
	I1212 20:30:03.461305   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.461312   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.461317   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.461323   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.461328   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.461493   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:03.461895   33042 pod_ready.go:97] node "multinode-562818" hosting pod "kube-apiserver-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.461925   33042 pod_ready.go:81] duration metric: took 6.3796ms waiting for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	E1212 20:30:03.461935   33042 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-562818" hosting pod "kube-apiserver-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.461941   33042 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:03.462000   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-562818
	I1212 20:30:03.462009   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.462016   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.462022   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.464235   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:03.464267   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.464275   33042 round_trippers.go:580]     Audit-Id: 8d2925c2-4f83-48bb-aba1-d149786cb495
	I1212 20:30:03.464280   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.464285   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.464291   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.464296   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.464301   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.464481   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-562818","namespace":"kube-system","uid":"23b73a4b-e188-4b7c-a13d-1fd61862a4e1","resourceVersion":"742","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.mirror":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.seen":"2023-12-12T20:19:35.712598374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I1212 20:30:03.553235   33042 request.go:629] Waited for 88.350039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.553310   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.553315   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.553322   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.553336   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.557109   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:03.557140   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.557148   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.557154   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.557159   33042 round_trippers.go:580]     Audit-Id: f5ef3855-80a2-4515-85d2-b6974543c0c7
	I1212 20:30:03.557164   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.557169   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.557174   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.557651   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:03.557959   33042 pod_ready.go:97] node "multinode-562818" hosting pod "kube-controller-manager-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.557980   33042 pod_ready.go:81] duration metric: took 96.032098ms waiting for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	E1212 20:30:03.557989   33042 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-562818" hosting pod "kube-controller-manager-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.557995   33042 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:03.753483   33042 request.go:629] Waited for 195.422835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:30:03.753586   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:30:03.753594   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.753602   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.753614   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.757089   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:03.757113   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.757120   33042 round_trippers.go:580]     Audit-Id: 91c8af02-ea65-4e44-ab4e-bf317a656298
	I1212 20:30:03.757126   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.757131   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.757135   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.757140   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.757145   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.757370   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4rrmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"2bcd718f-0c7c-461a-895e-44a0c1d566fd","resourceVersion":"816","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 20:30:03.953181   33042 request.go:629] Waited for 195.38973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.953257   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:03.953263   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:03.953271   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:03.953277   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:03.956343   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:03.956369   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:03.956380   33042 round_trippers.go:580]     Audit-Id: 862cbd95-f940-4947-9559-95cc06b263bf
	I1212 20:30:03.956390   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:03.956398   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:03.956406   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:03.956417   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:03.956427   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:03 GMT
	I1212 20:30:03.956630   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:03.956956   33042 pod_ready.go:97] node "multinode-562818" hosting pod "kube-proxy-4rrmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.956976   33042 pod_ready.go:81] duration metric: took 398.974832ms waiting for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	E1212 20:30:03.956989   33042 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-562818" hosting pod "kube-proxy-4rrmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:03.956996   33042 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:04.153419   33042 request.go:629] Waited for 196.369273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:30:04.153486   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:30:04.153491   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:04.153499   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:04.153520   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:04.156489   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:04.156513   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:04.156523   33042 round_trippers.go:580]     Audit-Id: 42dcbb66-d4b7-43c7-97fa-0c4bb711054f
	I1212 20:30:04.156531   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:04.156538   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:04.156546   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:04.156555   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:04.156563   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:04 GMT
	I1212 20:30:04.156714   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sxw8h","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f281e87-2597-4bd0-8ca4-cd7556c0a8e4","resourceVersion":"481","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 20:30:04.353530   33042 request.go:629] Waited for 196.385744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:30:04.353613   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:30:04.353619   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:04.353627   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:04.353633   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:04.355912   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:04.355934   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:04.355944   33042 round_trippers.go:580]     Audit-Id: ce32a257-df26-4bfd-b5a0-adead716d468
	I1212 20:30:04.355952   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:04.355960   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:04.355968   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:04.355978   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:04.355996   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:04 GMT
	I1212 20:30:04.356154   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"811","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_22_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I1212 20:30:04.356531   33042 pod_ready.go:92] pod "kube-proxy-sxw8h" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:04.356554   33042 pod_ready.go:81] duration metric: took 399.550922ms waiting for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:04.356568   33042 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:04.552950   33042 request.go:629] Waited for 196.290683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:30:04.553037   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:30:04.553047   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:04.553060   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:04.553073   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:04.555806   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:04.555905   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:04.555915   33042 round_trippers.go:580]     Audit-Id: 9bfcba8a-b12f-4633-8245-90877980232f
	I1212 20:30:04.555921   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:04.555926   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:04.555952   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:04.555960   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:04.555971   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:04 GMT
	I1212 20:30:04.556148   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xch7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"c47d9b9f-ae3c-4404-a33a-d689c4b3e034","resourceVersion":"686","creationTimestamp":"2023-12-12T20:21:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:21:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 20:30:04.752929   33042 request.go:629] Waited for 196.310694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:30:04.753030   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:30:04.753042   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:04.753053   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:04.753066   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:04.756040   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:04.756064   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:04.756073   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:04.756081   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:04.756090   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:04 GMT
	I1212 20:30:04.756098   33042 round_trippers.go:580]     Audit-Id: f80d0c9d-4247-49cc-bb31-f92d505ca8bc
	I1212 20:30:04.756106   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:04.756115   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:04.756440   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m03","uid":"86ea80af-5628-4573-839f-f5590d741ec8","resourceVersion":"709","creationTimestamp":"2023-12-12T20:22:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_22_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:22:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4085 chars]
	I1212 20:30:04.756810   33042 pod_ready.go:92] pod "kube-proxy-xch7v" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:04.756836   33042 pod_ready.go:81] duration metric: took 400.252922ms waiting for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:04.756848   33042 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:04.953347   33042 request.go:629] Waited for 196.406146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:04.953435   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:04.953452   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:04.953463   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:04.953477   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:04.956049   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:04.956074   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:04.956085   33042 round_trippers.go:580]     Audit-Id: 56074181-0adc-4c3c-8841-534535ca6cf2
	I1212 20:30:04.956094   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:04.956102   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:04.956109   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:04.956118   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:04.956125   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:04 GMT
	I1212 20:30:04.956310   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"747","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1212 20:30:05.153123   33042 request.go:629] Waited for 196.377837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:05.153206   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:05.153220   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:05.153232   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:05.153253   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:05.156321   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:05.156348   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:05.156358   33042 round_trippers.go:580]     Audit-Id: ad60f950-1829-4b54-905f-bfa32f1d4c0a
	I1212 20:30:05.156366   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:05.156374   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:05.156385   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:05.156397   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:05.156411   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:05 GMT
	I1212 20:30:05.156940   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:05.157423   33042 pod_ready.go:97] node "multinode-562818" hosting pod "kube-scheduler-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:05.157454   33042 pod_ready.go:81] duration metric: took 400.584507ms waiting for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	E1212 20:30:05.157468   33042 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-562818" hosting pod "kube-scheduler-multinode-562818" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-562818" has status "Ready":"False"
	I1212 20:30:05.157488   33042 pod_ready.go:38] duration metric: took 1.724255758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:30:05.157513   33042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:30:05.168949   33042 command_runner.go:130] > -16
	I1212 20:30:05.168992   33042 ops.go:34] apiserver oom_adj: -16
	I1212 20:30:05.168999   33042 kubeadm.go:640] restartCluster took 22.173439325s
	I1212 20:30:05.169005   33042 kubeadm.go:406] StartCluster complete in 22.222468708s
	I1212 20:30:05.169026   33042 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:30:05.169105   33042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:30:05.169946   33042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:30:05.170211   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:30:05.170352   33042 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 20:30:05.173236   33042 out.go:177] * Enabled addons: 
	I1212 20:30:05.170526   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:30:05.170559   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:30:05.174649   33042 addons.go:502] enable addons completed in 4.29519ms: enabled=[]
	I1212 20:30:05.174949   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:30:05.175290   33042 round_trippers.go:463] GET https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:30:05.175302   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:05.175310   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:05.175316   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:05.178313   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:05.178328   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:05.178334   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:05.178339   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:05.178344   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:05.178351   33042 round_trippers.go:580]     Content-Length: 291
	I1212 20:30:05.178360   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:05 GMT
	I1212 20:30:05.178368   33042 round_trippers.go:580]     Audit-Id: 9cc41f65-283b-47d8-a44b-af45ed991dff
	I1212 20:30:05.178381   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:05.178601   33042 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"819","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 20:30:05.178818   33042 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-562818" context rescaled to 1 replicas
	I1212 20:30:05.178854   33042 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:30:05.181795   33042 out.go:177] * Verifying Kubernetes components...
	I1212 20:30:05.183324   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:05.274130   33042 command_runner.go:130] > apiVersion: v1
	I1212 20:30:05.274157   33042 command_runner.go:130] > data:
	I1212 20:30:05.274165   33042 command_runner.go:130] >   Corefile: |
	I1212 20:30:05.274171   33042 command_runner.go:130] >     .:53 {
	I1212 20:30:05.274177   33042 command_runner.go:130] >         log
	I1212 20:30:05.274183   33042 command_runner.go:130] >         errors
	I1212 20:30:05.274189   33042 command_runner.go:130] >         health {
	I1212 20:30:05.274196   33042 command_runner.go:130] >            lameduck 5s
	I1212 20:30:05.274202   33042 command_runner.go:130] >         }
	I1212 20:30:05.274209   33042 command_runner.go:130] >         ready
	I1212 20:30:05.274220   33042 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 20:30:05.274231   33042 command_runner.go:130] >            pods insecure
	I1212 20:30:05.274240   33042 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 20:30:05.274246   33042 command_runner.go:130] >            ttl 30
	I1212 20:30:05.274252   33042 command_runner.go:130] >         }
	I1212 20:30:05.274262   33042 command_runner.go:130] >         prometheus :9153
	I1212 20:30:05.274267   33042 command_runner.go:130] >         hosts {
	I1212 20:30:05.274272   33042 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1212 20:30:05.274288   33042 command_runner.go:130] >            fallthrough
	I1212 20:30:05.274293   33042 command_runner.go:130] >         }
	I1212 20:30:05.274297   33042 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 20:30:05.274302   33042 command_runner.go:130] >            max_concurrent 1000
	I1212 20:30:05.274307   33042 command_runner.go:130] >         }
	I1212 20:30:05.274311   33042 command_runner.go:130] >         cache 30
	I1212 20:30:05.274319   33042 command_runner.go:130] >         loop
	I1212 20:30:05.274323   33042 command_runner.go:130] >         reload
	I1212 20:30:05.274328   33042 command_runner.go:130] >         loadbalance
	I1212 20:30:05.274332   33042 command_runner.go:130] >     }
	I1212 20:30:05.274340   33042 command_runner.go:130] > kind: ConfigMap
	I1212 20:30:05.274348   33042 command_runner.go:130] > metadata:
	I1212 20:30:05.274356   33042 command_runner.go:130] >   creationTimestamp: "2023-12-12T20:19:35Z"
	I1212 20:30:05.274365   33042 command_runner.go:130] >   name: coredns
	I1212 20:30:05.274373   33042 command_runner.go:130] >   namespace: kube-system
	I1212 20:30:05.274383   33042 command_runner.go:130] >   resourceVersion: "364"
	I1212 20:30:05.274391   33042 command_runner.go:130] >   uid: 9a863f66-aa0a-4fa3-b434-57a840a88dcb
	I1212 20:30:05.274474   33042 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 20:30:05.274555   33042 node_ready.go:35] waiting up to 6m0s for node "multinode-562818" to be "Ready" ...
	I1212 20:30:05.352902   33042 request.go:629] Waited for 78.182965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:05.352961   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:05.352968   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:05.352979   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:05.352990   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:05.355571   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:05.355602   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:05.355610   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:05.355615   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:05.355621   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:05 GMT
	I1212 20:30:05.355628   33042 round_trippers.go:580]     Audit-Id: e6251b90-72ec-414c-ba29-bb98e02b10e5
	I1212 20:30:05.355636   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:05.355646   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:05.355808   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:05.553621   33042 request.go:629] Waited for 197.26444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:05.553673   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:05.553678   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:05.553686   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:05.553692   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:05.556474   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:05.556500   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:05.556510   33042 round_trippers.go:580]     Audit-Id: a6d1bacb-0b77-46c4-87e1-690d439f55d0
	I1212 20:30:05.556517   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:05.556525   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:05.556533   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:05.556540   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:05.556547   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:05 GMT
	I1212 20:30:05.556708   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"716","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 20:30:06.057918   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:06.057944   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:06.057952   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:06.057962   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:06.061069   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:06.061090   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:06.061097   33042 round_trippers.go:580]     Audit-Id: f4f8da8c-a368-4ed7-9810-5aa9db27e736
	I1212 20:30:06.061102   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:06.061107   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:06.061112   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:06.061117   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:06.061122   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:06 GMT
	I1212 20:30:06.061264   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:06.061564   33042 node_ready.go:49] node "multinode-562818" has status "Ready":"True"
	I1212 20:30:06.061580   33042 node_ready.go:38] duration metric: took 786.992775ms waiting for node "multinode-562818" to be "Ready" ...
	I1212 20:30:06.061588   33042 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
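	The node_ready and pod_ready entries above are minikube polling the API server until the Node's Ready condition, and then each system-critical pod's Ready condition, reports True. A minimal, hypothetical Go sketch of that readiness loop using client-go (the kubeconfig path, poll interval, and timeout are illustrative assumptions, not values taken from this run):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll roughly twice a second, as the log above does, until the
		// node's Ready condition is True or the timeout expires.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-562818", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("node ready:", err == nil)
	}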
	I1212 20:30:06.061637   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:30:06.061646   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:06.061653   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:06.061659   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:06.065525   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:06.065562   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:06.065572   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:06.065579   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:06.065586   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:06.065594   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:06.065607   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:06 GMT
	I1212 20:30:06.065616   33042 round_trippers.go:580]     Audit-Id: 546a5485-df94-4b6a-b949-687424af5850
	I1212 20:30:06.067354   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"829"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82917 chars]
	I1212 20:30:06.069997   33042 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:06.153367   33042 request.go:629] Waited for 83.29333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:06.153435   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:06.153441   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:06.153448   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:06.153454   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:06.156380   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:06.156409   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:06.156419   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:06.156441   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:06 GMT
	I1212 20:30:06.156450   33042 round_trippers.go:580]     Audit-Id: f2e0a60f-e065-4b28-b714-10d5c88f0ce2
	I1212 20:30:06.156457   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:06.156465   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:06.156477   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:06.156765   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:06.353659   33042 request.go:629] Waited for 196.394283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:06.353730   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:06.353740   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:06.353752   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:06.353762   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:06.357287   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:06.357312   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:06.357322   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:06.357341   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:06.357350   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:06.357358   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:06.357366   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:06 GMT
	I1212 20:30:06.357375   33042 round_trippers.go:580]     Audit-Id: b7411728-2e6f-44e4-93d8-3539bee59427
	I1212 20:30:06.357982   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:06.553784   33042 request.go:629] Waited for 195.383728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:06.553878   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:06.553883   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:06.553891   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:06.553897   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:06.556930   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:06.556956   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:06.556966   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:06.556975   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:06.556984   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:06 GMT
	I1212 20:30:06.557001   33042 round_trippers.go:580]     Audit-Id: b2782ceb-dc87-46c6-9011-3dfc5a3bbb20
	I1212 20:30:06.557013   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:06.557027   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:06.557392   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:06.753232   33042 request.go:629] Waited for 195.399869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:06.753314   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:06.753319   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:06.753326   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:06.753332   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:06.756916   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:06.756943   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:06.756951   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:06.756956   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:06.756962   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:06.756967   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:06.756975   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:06 GMT
	I1212 20:30:06.756983   33042 round_trippers.go:580]     Audit-Id: 3f767686-f80d-4d6f-8ac0-1c3e5755474c
	I1212 20:30:06.757795   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:07.258953   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:07.258979   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:07.258986   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:07.258993   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:07.261772   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:07.261797   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:07.261807   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:07.261815   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:07.261823   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:07.261830   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:07 GMT
	I1212 20:30:07.261837   33042 round_trippers.go:580]     Audit-Id: 93a36077-21c9-43eb-a660-431e16a13097
	I1212 20:30:07.261849   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:07.262084   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:07.262591   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:07.262611   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:07.262619   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:07.262625   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:07.264867   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:07.264898   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:07.264907   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:07.264916   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:07 GMT
	I1212 20:30:07.264924   33042 round_trippers.go:580]     Audit-Id: 8bbf860a-53f2-4265-abfd-599531494433
	I1212 20:30:07.264938   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:07.264946   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:07.264951   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:07.265112   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:07.759231   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:07.759271   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:07.759280   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:07.759286   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:07.761815   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:07.761836   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:07.761848   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:07.761857   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:07.761863   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:07.761870   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:07.761878   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:07 GMT
	I1212 20:30:07.761887   33042 round_trippers.go:580]     Audit-Id: adf35ec0-68b1-4fb1-a4c9-914aafdeef1f
	I1212 20:30:07.762054   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:07.762612   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:07.762629   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:07.762636   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:07.762642   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:07.769653   33042 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 20:30:07.769676   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:07.769687   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:07.769695   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:07.769705   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:07 GMT
	I1212 20:30:07.769713   33042 round_trippers.go:580]     Audit-Id: 799db285-2405-41b3-9d7d-a63d76750842
	I1212 20:30:07.769740   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:07.769750   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:07.769936   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:08.258503   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:08.258532   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:08.258543   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:08.258553   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:08.261441   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:08.261466   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:08.261477   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:08.261485   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:08 GMT
	I1212 20:30:08.261493   33042 round_trippers.go:580]     Audit-Id: bef7af97-6f8d-4112-84a0-81acfd876daa
	I1212 20:30:08.261503   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:08.261511   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:08.261519   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:08.261878   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:08.262380   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:08.262399   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:08.262410   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:08.262419   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:08.264779   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:08.264804   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:08.264815   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:08.264827   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:08.264836   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:08 GMT
	I1212 20:30:08.264845   33042 round_trippers.go:580]     Audit-Id: c4da8631-8a32-4c06-8758-0b30e48be905
	I1212 20:30:08.264853   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:08.264862   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:08.265035   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:08.265403   33042 pod_ready.go:102] pod "coredns-5dd5756b68-689lp" in "kube-system" namespace has status "Ready":"False"
	I1212 20:30:08.758457   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:08.758488   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:08.758496   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:08.758504   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:08.766176   33042 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 20:30:08.766203   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:08.766213   33042 round_trippers.go:580]     Audit-Id: fa20d6d5-3656-42d3-a6ac-5ea5cbeac166
	I1212 20:30:08.766220   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:08.766229   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:08.766237   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:08.766245   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:08.766252   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:08 GMT
	I1212 20:30:08.766779   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:08.767224   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:08.767252   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:08.767260   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:08.767265   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:08.778360   33042 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1212 20:30:08.778388   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:08.778399   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:08.778420   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:08.778430   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:08 GMT
	I1212 20:30:08.778444   33042 round_trippers.go:580]     Audit-Id: cd43fb71-7a1c-4ce9-8d59-3cad9f6779ab
	I1212 20:30:08.778454   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:08.778461   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:08.779041   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:09.258682   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:09.258712   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.258722   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.258728   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.261823   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:09.261854   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.261867   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.261877   33042 round_trippers.go:580]     Audit-Id: 5230492c-ccdf-410e-a586-769e28ba6e29
	I1212 20:30:09.261885   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.261892   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.261898   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.261908   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.262109   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"728","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 20:30:09.262648   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:09.262670   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.262681   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.262689   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.265078   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:09.265098   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.265108   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.265115   33042 round_trippers.go:580]     Audit-Id: 6a6b8d52-af69-4d92-b614-58a25d8529d5
	I1212 20:30:09.265123   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.265131   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.265140   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.265147   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.265572   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:09.759317   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:30:09.759343   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.759352   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.759358   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.763321   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:09.763348   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.763357   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.763365   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.763373   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.763397   33042 round_trippers.go:580]     Audit-Id: 2b620431-bd33-47d5-a05f-81ca297fff5c
	I1212 20:30:09.763405   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.763410   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.763871   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 20:30:09.764304   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:09.764319   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.764326   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.764337   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.767171   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:09.767192   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.767199   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.767205   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.767215   33042 round_trippers.go:580]     Audit-Id: c3f94678-5fc5-4420-bb7d-d15b5c2ac6b7
	I1212 20:30:09.767220   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.767225   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.767230   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.767419   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:09.767701   33042 pod_ready.go:92] pod "coredns-5dd5756b68-689lp" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:09.767720   33042 pod_ready.go:81] duration metric: took 3.697697074s waiting for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
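	Each pod_ready step above applies the same per-pod check: fetch the Pod and keep polling until its PodReady condition reports True (here coredns flips from "Ready":"False" at resourceVersion 728 to True at 837). A hypothetical helper showing that condition check in isolation; the surrounding loop would be the same PollImmediate pattern sketched earlier, applied to every pod in the kube-system list:

	package readiness

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// podIsReady mirrors the pod_ready check in the log: a Pod counts as
	// "Ready" only when its PodReady condition reports ConditionTrue.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}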
	I1212 20:30:09.767729   33042 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:09.767791   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-562818
	I1212 20:30:09.767801   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.767807   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.767813   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.770149   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:09.770169   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.770178   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.770185   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.770194   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.770202   33042 round_trippers.go:580]     Audit-Id: 1ea955e8-da4e-4633-b8cc-e6b0e1bffbb2
	I1212 20:30:09.770210   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.770217   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.770396   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"831","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 20:30:09.770763   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:09.770780   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.770787   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.770793   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.772824   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:09.772842   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.772848   33042 round_trippers.go:580]     Audit-Id: c54d3aed-8299-4035-9174-d6319576a9c9
	I1212 20:30:09.772854   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.772859   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.772864   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.772870   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.772878   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.772992   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:09.773280   33042 pod_ready.go:92] pod "etcd-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:09.773296   33042 pod_ready.go:81] duration metric: took 5.562669ms waiting for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:09.773314   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:09.773362   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:09.773369   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.773376   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.773381   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.777039   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:09.777059   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.777069   33042 round_trippers.go:580]     Audit-Id: eddf0001-76af-4ea9-8013-052aefb1b69f
	I1212 20:30:09.777078   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.777087   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.777095   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.777104   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.777113   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.777291   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:09.953173   33042 request.go:629] Waited for 175.365428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:09.953265   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:09.953273   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:09.953287   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:09.953310   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:09.956182   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:09.956203   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:09.956211   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:09.956222   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:09 GMT
	I1212 20:30:09.956227   33042 round_trippers.go:580]     Audit-Id: 206ddcce-ab62-4f83-a307-ccfeccc99d34
	I1212 20:30:09.956233   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:09.956238   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:09.956242   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:09.956419   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:10.153213   33042 request.go:629] Waited for 196.419463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:10.153290   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:10.153296   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:10.153306   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:10.153316   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:10.158167   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:30:10.158190   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:10.158197   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:10.158205   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:10 GMT
	I1212 20:30:10.158210   33042 round_trippers.go:580]     Audit-Id: 6049d240-b67a-407f-a082-663a2f2ba751
	I1212 20:30:10.158215   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:10.158220   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:10.158224   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:10.158491   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:10.353367   33042 request.go:629] Waited for 194.350874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:10.353414   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:10.353419   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:10.353428   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:10.353437   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:10.356517   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:10.356542   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:10.356555   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:10.356561   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:10.356566   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:10.356571   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:10.356576   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:10 GMT
	I1212 20:30:10.356583   33042 round_trippers.go:580]     Audit-Id: 691f3c3a-cf8b-4c35-9f40-21c9c15e8296
	I1212 20:30:10.356790   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:10.857958   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:10.857983   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:10.857995   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:10.858006   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:10.867676   33042 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 20:30:10.867704   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:10.867715   33042 round_trippers.go:580]     Audit-Id: f8fb8226-cf00-4baf-a979-9ea21d16f1f5
	I1212 20:30:10.867742   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:10.867751   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:10.867760   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:10.867775   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:10.867783   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:10 GMT
	I1212 20:30:10.868233   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:10.868737   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:10.868753   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:10.868760   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:10.868766   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:10.870968   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:10.870988   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:10.870995   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:10.871001   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:10.871006   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:10 GMT
	I1212 20:30:10.871012   33042 round_trippers.go:580]     Audit-Id: ff8e7c2f-3834-4461-8f57-ea3cb1bba63d
	I1212 20:30:10.871020   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:10.871028   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:10.871471   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:11.358260   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:11.358289   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:11.358297   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:11.358303   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:11.361161   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:11.361186   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:11.361195   33042 round_trippers.go:580]     Audit-Id: 6ae84223-3fb9-478a-a13c-4934235d5dae
	I1212 20:30:11.361203   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:11.361208   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:11.361213   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:11.361217   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:11.361223   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:11 GMT
	I1212 20:30:11.361712   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:11.362110   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:11.362128   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:11.362136   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:11.362142   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:11.364476   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:11.364498   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:11.364508   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:11.364515   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:11 GMT
	I1212 20:30:11.364522   33042 round_trippers.go:580]     Audit-Id: 147102c3-10b1-4847-a4bd-c0ead8ef7512
	I1212 20:30:11.364529   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:11.364536   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:11.364544   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:11.364759   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:11.858249   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:11.858274   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:11.858285   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:11.858296   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:11.860967   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:11.860991   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:11.861001   33042 round_trippers.go:580]     Audit-Id: 806af632-c0ab-44ca-bb4d-7c95b361784f
	I1212 20:30:11.861010   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:11.861019   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:11.861031   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:11.861055   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:11.861071   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:11 GMT
	I1212 20:30:11.861577   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:11.861981   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:11.861996   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:11.862006   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:11.862015   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:11.865020   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:11.865036   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:11.865045   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:11 GMT
	I1212 20:30:11.865052   33042 round_trippers.go:580]     Audit-Id: 8b413612-331d-453a-aff1-0f30483d7589
	I1212 20:30:11.865061   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:11.865070   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:11.865079   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:11.865090   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:11.865416   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:11.865712   33042 pod_ready.go:102] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"False"
	I1212 20:30:12.358192   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:12.358220   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:12.358229   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:12.358234   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:12.361494   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:12.361519   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:12.361529   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:12 GMT
	I1212 20:30:12.361537   33042 round_trippers.go:580]     Audit-Id: 28be27b5-646d-4e7f-85df-5e487617c84a
	I1212 20:30:12.361545   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:12.361554   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:12.361563   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:12.361573   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:12.362299   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:12.362811   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:12.362831   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:12.362838   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:12.362843   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:12.365262   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:12.365279   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:12.365298   33042 round_trippers.go:580]     Audit-Id: f774e363-0628-49b6-83df-a68152a7a5bb
	I1212 20:30:12.365307   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:12.365319   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:12.365337   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:12.365350   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:12.365359   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:12 GMT
	I1212 20:30:12.365570   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:12.858292   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:12.858315   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:12.858324   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:12.858330   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:12.861037   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:12.861058   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:12.861086   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:12.861097   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:12.861111   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:12.861121   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:12.861132   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:12 GMT
	I1212 20:30:12.861145   33042 round_trippers.go:580]     Audit-Id: 3a7d946c-1cba-4860-a87e-38295be6a0df
	I1212 20:30:12.861401   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:12.861831   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:12.861845   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:12.861856   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:12.861865   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:12.864103   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:12.864118   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:12.864124   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:12.864130   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:12 GMT
	I1212 20:30:12.864135   33042 round_trippers.go:580]     Audit-Id: 14b7c8e4-49e0-4ef1-9530-e0cb3ef46243
	I1212 20:30:12.864140   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:12.864146   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:12.864159   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:12.864324   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:13.358172   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:13.358195   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:13.358209   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:13.358215   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:13.361127   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:13.361145   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:13.361152   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:13.361157   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:13.361162   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:13.361167   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:13 GMT
	I1212 20:30:13.361173   33042 round_trippers.go:580]     Audit-Id: e971d771-2524-4586-9704-64a8e6c2b4c3
	I1212 20:30:13.361193   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:13.361444   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:13.361885   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:13.361900   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:13.361906   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:13.361912   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:13.364007   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:13.364024   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:13.364033   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:13.364041   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:13 GMT
	I1212 20:30:13.364049   33042 round_trippers.go:580]     Audit-Id: 91ba61f4-f3a4-4883-a20c-3c0c7ed43728
	I1212 20:30:13.364058   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:13.364066   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:13.364077   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:13.364385   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:13.858093   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:13.858122   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:13.858130   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:13.858141   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:13.861261   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:13.861285   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:13.861301   33042 round_trippers.go:580]     Audit-Id: 39489366-78ed-4a0e-9d01-188d2d2bfe41
	I1212 20:30:13.861309   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:13.861316   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:13.861324   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:13.861332   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:13.861340   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:13 GMT
	I1212 20:30:13.861740   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:13.862270   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:13.862286   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:13.862293   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:13.862299   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:13.864730   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:13.864749   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:13.864759   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:13.864776   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:13 GMT
	I1212 20:30:13.864800   33042 round_trippers.go:580]     Audit-Id: 70903a04-1a83-42e6-a3e1-798db640e644
	I1212 20:30:13.864809   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:13.864818   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:13.864827   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:13.865092   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:14.357749   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:14.357778   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.357786   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.357795   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.360614   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:14.360638   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.360646   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.360651   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.360656   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.360661   33042 round_trippers.go:580]     Audit-Id: e32a0d83-50fc-439c-923e-3114839d2a68
	I1212 20:30:14.360666   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.360670   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.360871   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"738","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 20:30:14.361345   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:14.361361   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.361368   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.361374   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.363838   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:14.363854   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.363860   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.363866   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.363870   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.363876   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.363880   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.363885   33042 round_trippers.go:580]     Audit-Id: 5dccb60b-5156-4f1d-9246-98dd44b537de
	I1212 20:30:14.364240   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:14.364521   33042 pod_ready.go:102] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"False"
	I1212 20:30:14.857963   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:30:14.858012   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.858022   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.858030   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.860677   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:14.860697   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.860703   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.860709   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.860714   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.860718   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.860724   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.860728   33042 round_trippers.go:580]     Audit-Id: fc7be846-2e3e-406e-96b1-6a05031974ba
	I1212 20:30:14.860997   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"857","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 20:30:14.861453   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:14.861469   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.861476   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.861484   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.864364   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:14.864389   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.864398   33042 round_trippers.go:580]     Audit-Id: 19448ff1-303c-48ef-9e55-4351d79f3100
	I1212 20:30:14.864404   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.864410   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.864414   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.864420   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.864425   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.864859   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:14.865268   33042 pod_ready.go:92] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:14.865289   33042 pod_ready.go:81] duration metric: took 5.091968499s waiting for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
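The pod_ready.go lines above show minikube re-polling the kube-apiserver mirror pod roughly every 500ms until its Ready condition flips to True (about 5.1s in this run); each iteration appears in the log as a paired GET of the pod and of its node, with the outcome logged by pod_ready.go:102 ("Ready":"False") and pod_ready.go:92 ("Ready":"True"). Below is a minimal sketch of that poll-until-Ready pattern using client-go; waitForPodReady, the package name, and the 500ms interval are illustrative assumptions, not minikube's actual pod_ready.go implementation.

// readinesspoll.go: minimal sketch of the poll-until-Ready pattern visible in
// the log above. Assumes a configured *kubernetes.Clientset; waitForPodReady
// is a hypothetical helper, not minikube's code.
package readinesspoll

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady re-GETs the pod every 500ms (matching the cadence in the log)
// until its Ready condition is True or the timeout expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet" and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}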
	I1212 20:30:14.865301   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:14.865365   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-562818
	I1212 20:30:14.865377   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.865393   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.865408   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.867647   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:14.867668   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.867678   33042 round_trippers.go:580]     Audit-Id: 3bbcc74a-38c1-43e1-8924-819e55e4f5b3
	I1212 20:30:14.867687   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.867698   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.867709   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.867717   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.867729   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.868207   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-562818","namespace":"kube-system","uid":"23b73a4b-e188-4b7c-a13d-1fd61862a4e1","resourceVersion":"846","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.mirror":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.seen":"2023-12-12T20:19:35.712598374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 20:30:14.868664   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:14.868681   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.868688   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.868694   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.870542   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:30:14.870560   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.870570   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.870582   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.870594   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.870604   33042 round_trippers.go:580]     Audit-Id: 73f048ea-6d7f-4dc3-9952-02942c572674
	I1212 20:30:14.870611   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.870616   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.870787   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:14.871178   33042 pod_ready.go:92] pod "kube-controller-manager-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:14.871199   33042 pod_ready.go:81] duration metric: took 5.885529ms waiting for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:14.871208   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:14.871283   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:30:14.871294   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.871305   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.871313   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.873223   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:30:14.873241   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.873250   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.873259   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.873266   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.873271   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.873279   33042 round_trippers.go:580]     Audit-Id: 4260fa11-46b5-4235-a723-f874c8caaed6
	I1212 20:30:14.873284   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.873380   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4rrmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"2bcd718f-0c7c-461a-895e-44a0c1d566fd","resourceVersion":"816","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 20:30:14.873823   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:14.873838   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.873849   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.873858   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.875750   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:30:14.875769   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.875776   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.875786   33042 round_trippers.go:580]     Audit-Id: f6198db6-4ca8-4aa8-a5d5-924872605afd
	I1212 20:30:14.875794   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.875804   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.875812   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.875820   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.875948   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:14.876263   33042 pod_ready.go:92] pod "kube-proxy-4rrmn" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:14.876279   33042 pod_ready.go:81] duration metric: took 5.060193ms waiting for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:14.876286   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:14.953610   33042 request.go:629] Waited for 77.270541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:30:14.953704   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:30:14.953713   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:14.953725   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:14.953742   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:14.956533   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:14.956555   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:14.956562   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:14.956567   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:14 GMT
	I1212 20:30:14.956574   33042 round_trippers.go:580]     Audit-Id: ff58ebcf-a8bd-4eb0-bbb1-b0d0810d756e
	I1212 20:30:14.956580   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:14.956585   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:14.956589   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:14.956827   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sxw8h","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f281e87-2597-4bd0-8ca4-cd7556c0a8e4","resourceVersion":"481","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 20:30:15.153624   33042 request.go:629] Waited for 196.375015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:30:15.153695   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:30:15.153705   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:15.153715   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:15.153731   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:15.156858   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:15.156888   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:15.156898   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:15.156907   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:15.156914   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:15 GMT
	I1212 20:30:15.156922   33042 round_trippers.go:580]     Audit-Id: 68c8fc16-9aff-42a7-9cb2-3c498ed3224e
	I1212 20:30:15.156929   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:15.156943   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:15.157092   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0","resourceVersion":"811","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_22_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I1212 20:30:15.157462   33042 pod_ready.go:92] pod "kube-proxy-sxw8h" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:15.157488   33042 pod_ready.go:81] duration metric: took 281.195901ms waiting for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:15.157501   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:15.352863   33042 request.go:629] Waited for 195.303406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:30:15.352918   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:30:15.352922   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:15.352930   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:15.352936   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:15.355991   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:15.356015   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:15.356025   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:15.356032   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:15.356040   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:15 GMT
	I1212 20:30:15.356047   33042 round_trippers.go:580]     Audit-Id: a4e4525e-3cf3-451a-99d3-ebd2a18959aa
	I1212 20:30:15.356055   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:15.356064   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:15.356250   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xch7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"c47d9b9f-ae3c-4404-a33a-d689c4b3e034","resourceVersion":"686","creationTimestamp":"2023-12-12T20:21:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:21:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 20:30:15.552820   33042 request.go:629] Waited for 196.13471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:30:15.552890   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:30:15.552895   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:15.552902   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:15.552909   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:15.555565   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:15.555590   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:15.555600   33042 round_trippers.go:580]     Audit-Id: 228a8bbe-ed8a-438d-8bd8-884bbad3b99e
	I1212 20:30:15.555609   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:15.555615   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:15.555621   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:15.555628   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:15.555633   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:15 GMT
	I1212 20:30:15.555745   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m03","uid":"86ea80af-5628-4573-839f-f5590d741ec8","resourceVersion":"852","creationTimestamp":"2023-12-12T20:22:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_22_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:22:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1212 20:30:15.556163   33042 pod_ready.go:92] pod "kube-proxy-xch7v" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:15.556189   33042 pod_ready.go:81] duration metric: took 398.680828ms waiting for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:15.556202   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:15.753619   33042 request.go:629] Waited for 197.34738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:15.753694   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:15.753701   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:15.753715   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:15.753724   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:15.757973   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:30:15.757999   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:15.758009   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:15.758020   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:15.758029   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:15.758039   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:15.758049   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:15 GMT
	I1212 20:30:15.758063   33042 round_trippers.go:580]     Audit-Id: b604c9b7-cb99-4943-9bc7-e99bc52160ac
	I1212 20:30:15.758224   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"747","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1212 20:30:15.953144   33042 request.go:629] Waited for 194.407033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:15.953217   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:15.953223   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:15.953231   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:15.953237   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:15.956923   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:15.956950   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:15.956960   33042 round_trippers.go:580]     Audit-Id: cc4c44c8-03df-408c-b0ff-3d7a74b73aaf
	I1212 20:30:15.956968   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:15.956975   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:15.956982   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:15.956990   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:15.957002   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:15 GMT
	I1212 20:30:15.957124   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:16.152912   33042 request.go:629] Waited for 195.312823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:16.152989   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:16.152994   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:16.153002   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:16.153007   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:16.156254   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:16.156282   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:16.156291   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:16.156299   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:16.156305   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:16.156312   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:16.156320   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:16 GMT
	I1212 20:30:16.156331   33042 round_trippers.go:580]     Audit-Id: 4c523875-84b8-4acd-bd5c-eaeb69adf7b5
	I1212 20:30:16.156513   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"747","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1212 20:30:16.353412   33042 request.go:629] Waited for 196.381018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:16.353482   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:16.353487   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:16.353494   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:16.353502   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:16.356387   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:16.356410   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:16.356421   33042 round_trippers.go:580]     Audit-Id: ff1916d8-7d97-471a-8428-c02b90ab8879
	I1212 20:30:16.356430   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:16.356439   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:16.356445   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:16.356451   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:16.356456   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:16 GMT
	I1212 20:30:16.356643   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:16.857865   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:16.857889   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:16.857897   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:16.857903   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:16.861070   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:16.861091   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:16.861101   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:16.861109   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:16.861117   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:16.861127   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:16.861142   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:16 GMT
	I1212 20:30:16.861151   33042 round_trippers.go:580]     Audit-Id: d1aecc1b-6b2c-4542-9513-bbdd028e5d02
	I1212 20:30:16.861988   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"747","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1212 20:30:16.862346   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:16.862361   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:16.862370   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:16.862376   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:16.864662   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:16.864679   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:16.864688   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:16.864696   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:16.864705   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:16.864715   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:16.864730   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:16 GMT
	I1212 20:30:16.864746   33042 round_trippers.go:580]     Audit-Id: 1134feb1-dd33-4792-86f7-61dcea3d1e28
	I1212 20:30:16.864953   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:17.357589   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:30:17.357616   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.357624   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.357630   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.360546   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:17.360569   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.360579   33042 round_trippers.go:580]     Audit-Id: 202fcdbf-3197-4a45-be34-c37e82802833
	I1212 20:30:17.360586   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.360593   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.360601   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.360609   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.360622   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.360848   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"859","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 20:30:17.361211   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:30:17.361226   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.361233   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.361247   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.363617   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:30:17.363640   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.363649   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.363658   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.363666   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.363677   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.363685   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.363697   33042 round_trippers.go:580]     Audit-Id: 449e6d79-51f7-4c20-90e8-17acec8d9177
	I1212 20:30:17.364322   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 20:30:17.364618   33042 pod_ready.go:92] pod "kube-scheduler-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:30:17.364634   33042 pod_ready.go:81] duration metric: took 1.808418762s waiting for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:30:17.364643   33042 pod_ready.go:38] duration metric: took 11.303047675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
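	The polling above (pod_ready.go) repeatedly GETs each system-critical pod and its node, and declares the pod "Ready" once the PodReady condition reports True. A minimal client-go sketch of that readiness check, assuming a reachable kubeconfig at the default path; the helper name isPodReady is illustrative, and only the pod name is taken from this run:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is an assumption for this sketch; the test talks to its own profile's cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-562818", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", isPodReady(pod))
	}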
	I1212 20:30:17.364656   33042 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:30:17.364697   33042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:30:17.378601   33042 command_runner.go:130] > 1066
	I1212 20:30:17.378638   33042 api_server.go:72] duration metric: took 12.199745734s to wait for apiserver process to appear ...
	I1212 20:30:17.378648   33042 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:30:17.378666   33042 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:30:17.384469   33042 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I1212 20:30:17.384532   33042 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I1212 20:30:17.384541   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.384548   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.384555   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.385473   33042 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 20:30:17.385486   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.385492   33042 round_trippers.go:580]     Content-Length: 264
	I1212 20:30:17.385497   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.385502   33042 round_trippers.go:580]     Audit-Id: 488df9f6-2684-4f8c-b9a9-9ccacdf4f5f5
	I1212 20:30:17.385508   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.385513   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.385518   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.385524   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.385536   33042 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 20:30:17.385573   33042 api_server.go:141] control plane version: v1.28.4
	I1212 20:30:17.385586   33042 api_server.go:131] duration metric: took 6.932594ms to wait for apiserver health ...
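	The lines above show the health gate: minikube probes the apiserver's /healthz endpoint and then reads /version to record the control-plane version. A minimal Go sketch of such a probe against the address from this log; it skips TLS verification purely for brevity, which the real client does not do:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify is a shortcut for the sketch only; minikube trusts the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.77:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // a healthy apiserver answers 200 "ok"
	}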
	I1212 20:30:17.385591   33042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:30:17.385636   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:30:17.385643   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.385649   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.385655   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.389024   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:17.389044   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.389054   33042 round_trippers.go:580]     Audit-Id: 01c89a29-5a43-4bdc-8788-4de716608d76
	I1212 20:30:17.389063   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.389072   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.389081   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.389089   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.389098   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.390518   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I1212 20:30:17.393220   33042 system_pods.go:59] 12 kube-system pods found
	I1212 20:30:17.393246   33042 system_pods.go:61] "coredns-5dd5756b68-689lp" [e77852fc-eb8a-4027-98e1-070b4ca43f54] Running
	I1212 20:30:17.393250   33042 system_pods.go:61] "etcd-multinode-562818" [5a874e4d-12ab-400c-8086-05073ffd1b13] Running
	I1212 20:30:17.393256   33042 system_pods.go:61] "kindnet-24p9c" [e80eb9ab-2919-4be1-890d-34c26202f7fc] Running
	I1212 20:30:17.393261   33042 system_pods.go:61] "kindnet-cmz7d" [b60f3109-0845-483d-81c9-1fe2bbffd622] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:17.393267   33042 system_pods.go:61] "kindnet-q7n6w" [ff09c341-d00a-4983-b169-5c19cf81b490] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:17.393271   33042 system_pods.go:61] "kube-apiserver-multinode-562818" [7d766a87-0f52-46ef-b1fb-392a197bca9a] Running
	I1212 20:30:17.393276   33042 system_pods.go:61] "kube-controller-manager-multinode-562818" [23b73a4b-e188-4b7c-a13d-1fd61862a4e1] Running
	I1212 20:30:17.393280   33042 system_pods.go:61] "kube-proxy-4rrmn" [2bcd718f-0c7c-461a-895e-44a0c1d566fd] Running
	I1212 20:30:17.393285   33042 system_pods.go:61] "kube-proxy-sxw8h" [1f281e87-2597-4bd0-8ca4-cd7556c0a8e4] Running
	I1212 20:30:17.393289   33042 system_pods.go:61] "kube-proxy-xch7v" [c47d9b9f-ae3c-4404-a33a-d689c4b3e034] Running
	I1212 20:30:17.393295   33042 system_pods.go:61] "kube-scheduler-multinode-562818" [994614e5-3a18-422e-86ad-54c67237293d] Running
	I1212 20:30:17.393299   33042 system_pods.go:61] "storage-provisioner" [9efe55ce-d87d-4074-9983-d880908d6d3d] Running
	I1212 20:30:17.393303   33042 system_pods.go:74] duration metric: took 7.707401ms to wait for pod list to return data ...
	I1212 20:30:17.393309   33042 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:30:17.552897   33042 request.go:629] Waited for 159.518423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I1212 20:30:17.552964   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I1212 20:30:17.552970   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.552980   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.552989   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.556330   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:17.556347   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.556356   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.556364   33042 round_trippers.go:580]     Content-Length: 261
	I1212 20:30:17.556371   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.556380   33042 round_trippers.go:580]     Audit-Id: cbb1bb19-b309-4d9d-a402-e2bf1979ee51
	I1212 20:30:17.556393   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.556405   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.556415   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.556448   33042 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"250ddd20-38f2-4339-8143-a461b27c59d0","resourceVersion":"315","creationTimestamp":"2023-12-12T20:19:47Z"}}]}
	I1212 20:30:17.556626   33042 default_sa.go:45] found service account: "default"
	I1212 20:30:17.556649   33042 default_sa.go:55] duration metric: took 163.333401ms for default service account to be created ...
	I1212 20:30:17.556660   33042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:30:17.753234   33042 request.go:629] Waited for 196.489802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:30:17.753323   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:30:17.753335   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.753351   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.753362   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.757509   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:30:17.757543   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.757554   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.757561   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.757569   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.757577   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.757588   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.757598   33042 round_trippers.go:580]     Audit-Id: dd316f0a-7bba-4104-8910-29040198a2cb
	I1212 20:30:17.759668   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I1212 20:30:17.762035   33042 system_pods.go:86] 12 kube-system pods found
	I1212 20:30:17.762060   33042 system_pods.go:89] "coredns-5dd5756b68-689lp" [e77852fc-eb8a-4027-98e1-070b4ca43f54] Running
	I1212 20:30:17.762067   33042 system_pods.go:89] "etcd-multinode-562818" [5a874e4d-12ab-400c-8086-05073ffd1b13] Running
	I1212 20:30:17.762079   33042 system_pods.go:89] "kindnet-24p9c" [e80eb9ab-2919-4be1-890d-34c26202f7fc] Running
	I1212 20:30:17.762088   33042 system_pods.go:89] "kindnet-cmz7d" [b60f3109-0845-483d-81c9-1fe2bbffd622] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:17.762098   33042 system_pods.go:89] "kindnet-q7n6w" [ff09c341-d00a-4983-b169-5c19cf81b490] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:30:17.762106   33042 system_pods.go:89] "kube-apiserver-multinode-562818" [7d766a87-0f52-46ef-b1fb-392a197bca9a] Running
	I1212 20:30:17.762117   33042 system_pods.go:89] "kube-controller-manager-multinode-562818" [23b73a4b-e188-4b7c-a13d-1fd61862a4e1] Running
	I1212 20:30:17.762128   33042 system_pods.go:89] "kube-proxy-4rrmn" [2bcd718f-0c7c-461a-895e-44a0c1d566fd] Running
	I1212 20:30:17.762134   33042 system_pods.go:89] "kube-proxy-sxw8h" [1f281e87-2597-4bd0-8ca4-cd7556c0a8e4] Running
	I1212 20:30:17.762141   33042 system_pods.go:89] "kube-proxy-xch7v" [c47d9b9f-ae3c-4404-a33a-d689c4b3e034] Running
	I1212 20:30:17.762149   33042 system_pods.go:89] "kube-scheduler-multinode-562818" [994614e5-3a18-422e-86ad-54c67237293d] Running
	I1212 20:30:17.762156   33042 system_pods.go:89] "storage-provisioner" [9efe55ce-d87d-4074-9983-d880908d6d3d] Running
	I1212 20:30:17.762166   33042 system_pods.go:126] duration metric: took 205.498554ms to wait for k8s-apps to be running ...
	I1212 20:30:17.762179   33042 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:30:17.762234   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:17.775051   33042 system_svc.go:56] duration metric: took 12.863836ms WaitForService to wait for kubelet.
	I1212 20:30:17.775081   33042 kubeadm.go:581] duration metric: took 12.596190914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:30:17.775100   33042 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:30:17.953529   33042 request.go:629] Waited for 178.352041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I1212 20:30:17.953612   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I1212 20:30:17.953619   33042 round_trippers.go:469] Request Headers:
	I1212 20:30:17.953630   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:30:17.953640   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:30:17.957462   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:30:17.957488   33042 round_trippers.go:577] Response Headers:
	I1212 20:30:17.957500   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:30:17.957514   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:30:17.957523   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:30:17 GMT
	I1212 20:30:17.957535   33042 round_trippers.go:580]     Audit-Id: 536d0075-3094-4cf8-9f70-331b0d9a4b6a
	I1212 20:30:17.957545   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:30:17.957556   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:30:17.958226   33042 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"829","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I1212 20:30:17.958818   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:30:17.958837   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:30:17.958847   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:30:17.958853   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:30:17.958862   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:30:17.958869   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:30:17.958882   33042 node_conditions.go:105] duration metric: took 183.777657ms to run NodePressure ...
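	The NodePressure step above lists the nodes and reads each node's cpu and ephemeral-storage capacity from its status. A minimal client-go sketch that prints the same fields, again assuming a kubeconfig at the default path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			capacity := n.Status.Capacity
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, capacity.Cpu(), capacity.StorageEphemeral())
		}
	}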
	I1212 20:30:17.958892   33042 start.go:228] waiting for startup goroutines ...
	I1212 20:30:17.958899   33042 start.go:233] waiting for cluster config update ...
	I1212 20:30:17.958905   33042 start.go:242] writing updated cluster config ...
	I1212 20:30:17.959371   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:30:17.959477   33042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:30:17.961585   33042 out.go:177] * Starting worker node multinode-562818-m02 in cluster multinode-562818
	I1212 20:30:17.962690   33042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:30:17.962708   33042 cache.go:56] Caching tarball of preloaded images
	I1212 20:30:17.962810   33042 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:30:17.962824   33042 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 20:30:17.962912   33042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:30:17.963098   33042 start.go:365] acquiring machines lock for multinode-562818-m02: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:30:17.963155   33042 start.go:369] acquired machines lock for "multinode-562818-m02" in 35.969µs
	I1212 20:30:17.963175   33042 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:30:17.963184   33042 fix.go:54] fixHost starting: m02
	I1212 20:30:17.963509   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:30:17.963543   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:30:17.977690   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I1212 20:30:17.978082   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:30:17.978523   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:30:17.978550   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:30:17.978845   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:30:17.979005   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:30:17.979220   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetState
	I1212 20:30:17.980803   33042 fix.go:102] recreateIfNeeded on multinode-562818-m02: state=Running err=<nil>
	W1212 20:30:17.980819   33042 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:30:17.982622   33042 out.go:177] * Updating the running kvm2 "multinode-562818-m02" VM ...
	I1212 20:30:17.983732   33042 machine.go:88] provisioning docker machine ...
	I1212 20:30:17.983752   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:30:17.983954   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:30:17.984091   33042 buildroot.go:166] provisioning hostname "multinode-562818-m02"
	I1212 20:30:17.984111   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:30:17.984255   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:30:17.986494   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:17.986949   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:30:17.986977   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:17.987145   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:30:17.987338   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:17.987484   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:17.987641   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:30:17.987807   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:30:17.988120   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:30:17.988135   33042 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-562818-m02 && echo "multinode-562818-m02" | sudo tee /etc/hostname
	I1212 20:30:18.124928   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-562818-m02
	
	I1212 20:30:18.124956   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:30:18.127750   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.128094   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:30:18.128127   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.128314   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:30:18.128460   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:18.128624   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:18.128708   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:30:18.128853   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:30:18.129197   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:30:18.129215   33042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-562818-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-562818-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-562818-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:30:18.252205   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:30:18.252242   33042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:30:18.252263   33042 buildroot.go:174] setting up certificates
	I1212 20:30:18.252280   33042 provision.go:83] configureAuth start
	I1212 20:30:18.252297   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetMachineName
	I1212 20:30:18.252570   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:30:18.255130   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.255590   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:30:18.255616   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.255767   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:30:18.258142   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.258497   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:30:18.258526   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.258675   33042 provision.go:138] copyHostCerts
	I1212 20:30:18.258704   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:30:18.258735   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:30:18.258744   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:30:18.258811   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:30:18.258878   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:30:18.258895   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:30:18.258902   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:30:18.258925   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:30:18.258965   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:30:18.258980   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:30:18.258984   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:30:18.259003   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:30:18.259045   33042 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.multinode-562818-m02 san=[192.168.39.65 192.168.39.65 localhost 127.0.0.1 minikube multinode-562818-m02]
	I1212 20:30:18.412459   33042 provision.go:172] copyRemoteCerts
	I1212 20:30:18.412512   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:30:18.412535   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:30:18.415354   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.415725   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:30:18.415757   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.415909   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:30:18.416074   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:18.416217   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:30:18.416340   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:30:18.506379   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:30:18.506448   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:30:18.531879   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:30:18.531952   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 20:30:18.554051   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:30:18.554110   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:30:18.576436   33042 provision.go:86] duration metric: configureAuth took 324.14108ms
	I1212 20:30:18.576464   33042 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:30:18.576654   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:30:18.576718   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:30:18.579295   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.579594   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:30:18.579621   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:30:18.579833   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:30:18.579987   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:18.580214   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:30:18.580389   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:30:18.580580   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:30:18.580893   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:30:18.580909   33042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:31:49.227108   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:31:49.227140   33042 machine.go:91] provisioned docker machine in 1m31.243392339s
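	Note on the timing above: the SSH command writing CRIO_MINIKUBE_OPTIONS and then running "sudo systemctl restart crio" was issued at 20:30:18 and only returned at 20:31:49, so the CRI-O restart accounts for essentially all of the 1m31s provisioning duration reported here. A minimal way to investigate such a slow restart on the guest (assuming systemd journal access over the same SSH session; these diagnostic commands are not part of the logged run) would be:

		sudo systemctl status crio --no-pager
		sudo journalctl -u crio --no-pager -n 50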
	I1212 20:31:49.227158   33042 start.go:300] post-start starting for "multinode-562818-m02" (driver="kvm2")
	I1212 20:31:49.227169   33042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:31:49.227188   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:31:49.227530   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:31:49.227559   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:31:49.230263   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.230675   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:31:49.230708   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.230921   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:31:49.231132   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:31:49.231320   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:31:49.231483   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:31:49.321472   33042 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:31:49.326493   33042 command_runner.go:130] > NAME=Buildroot
	I1212 20:31:49.326517   33042 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 20:31:49.326522   33042 command_runner.go:130] > ID=buildroot
	I1212 20:31:49.326528   33042 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 20:31:49.326532   33042 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 20:31:49.326563   33042 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:31:49.326576   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:31:49.326653   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:31:49.326721   33042 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:31:49.326730   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /etc/ssl/certs/164562.pem
	I1212 20:31:49.326806   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:31:49.335993   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:31:49.363499   33042 start.go:303] post-start completed in 136.327338ms
	I1212 20:31:49.363522   33042 fix.go:56] fixHost completed within 1m31.400339062s
	I1212 20:31:49.363542   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:31:49.366219   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.366583   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:31:49.366616   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.366782   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:31:49.367008   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:31:49.367174   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:31:49.367337   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:31:49.367485   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:31:49.367788   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 20:31:49.367799   33042 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:31:49.492320   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702413109.484247313
	
	I1212 20:31:49.492345   33042 fix.go:206] guest clock: 1702413109.484247313
	I1212 20:31:49.492355   33042 fix.go:219] Guest: 2023-12-12 20:31:49.484247313 +0000 UTC Remote: 2023-12-12 20:31:49.363526294 +0000 UTC m=+452.000577809 (delta=120.721019ms)
	I1212 20:31:49.492374   33042 fix.go:190] guest clock delta is within tolerance: 120.721019ms
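	The "%!s(MISSING)" and "%!N(MISSING)" verbs in the logged SSH command above are an artifact of minikube printing its printf format string; judging by the returned value 1702413109.484247313, the command actually executed on the guest for this clock-delta check appears to be the standard seconds-plus-nanoseconds date query:

		date +%s.%N    # e.g. 1702413109.484247313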
	I1212 20:31:49.492388   33042 start.go:83] releasing machines lock for "multinode-562818-m02", held for 1m31.529213522s
	I1212 20:31:49.492424   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:31:49.492678   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:31:49.495143   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.495471   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:31:49.495507   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.498009   33042 out.go:177] * Found network options:
	I1212 20:31:49.499743   33042 out.go:177]   - NO_PROXY=192.168.39.77
	W1212 20:31:49.501149   33042 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 20:31:49.501174   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:31:49.501730   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:31:49.501936   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:31:49.502042   33042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:31:49.502081   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	W1212 20:31:49.502136   33042 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 20:31:49.502217   33042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:31:49.502239   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:31:49.504833   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.504858   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.505242   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:31:49.505274   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.505366   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:31:49.505397   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:49.505399   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:31:49.505571   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:31:49.505580   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:31:49.505743   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:31:49.505750   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:31:49.505936   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:31:49.505942   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:31:49.506085   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:31:49.620343   33042 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:31:49.743662   33042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:31:49.749743   33042 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:31:49.749794   33042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:31:49.749850   33042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:31:49.758210   33042 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:31:49.758239   33042 start.go:475] detecting cgroup driver to use...
	I1212 20:31:49.758312   33042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:31:49.773775   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:31:49.787409   33042 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:31:49.787477   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:31:49.800857   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:31:49.813747   33042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:31:49.956525   33042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:31:50.085173   33042 docker.go:219] disabling docker service ...
	I1212 20:31:50.085241   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:31:50.098658   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:31:50.110993   33042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:31:50.239028   33042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:31:50.364874   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:31:50.378209   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:31:50.394783   33042 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:31:50.395182   33042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 20:31:50.395229   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:31:50.404376   33042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:31:50.404431   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:31:50.413580   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:31:50.423256   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:31:50.432259   33042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:31:50.441401   33042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:31:50.449276   33042 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:31:50.449364   33042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:31:50.457424   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:31:50.592490   33042 ssh_runner.go:195] Run: sudo systemctl restart crio
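	Taken together, the sed edits and restart above amount to ensuring that the CRI-O drop-in config carries the values minikube wants and then reloading the runtime. A sketch of the end state, using only the paths and values visible in this log:

		# relevant keys in /etc/crio/crio.conf.d/02-crio.conf after the edits
		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"

		sudo systemctl daemon-reload && sudo systemctl restart crio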
	I1212 20:31:50.827309   33042 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:31:50.827386   33042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:31:50.832407   33042 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:31:50.832437   33042 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:31:50.832447   33042 command_runner.go:130] > Device: 16h/22d	Inode: 1283        Links: 1
	I1212 20:31:50.832458   33042 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:31:50.832466   33042 command_runner.go:130] > Access: 2023-12-12 20:31:50.754849207 +0000
	I1212 20:31:50.832475   33042 command_runner.go:130] > Modify: 2023-12-12 20:31:50.754849207 +0000
	I1212 20:31:50.832487   33042 command_runner.go:130] > Change: 2023-12-12 20:31:50.754849207 +0000
	I1212 20:31:50.832498   33042 command_runner.go:130] >  Birth: -
	I1212 20:31:50.832519   33042 start.go:543] Will wait 60s for crictl version
	I1212 20:31:50.832562   33042 ssh_runner.go:195] Run: which crictl
	I1212 20:31:50.836395   33042 command_runner.go:130] > /usr/bin/crictl
	I1212 20:31:50.836570   33042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:31:50.882423   33042 command_runner.go:130] > Version:  0.1.0
	I1212 20:31:50.882449   33042 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:31:50.882456   33042 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 20:31:50.882465   33042 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:31:50.883871   33042 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
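	The version probe above relies on the runtime-endpoint written to /etc/crictl.yaml a moment earlier; an equivalent invocation that passes the CRI-O socket explicitly (same information, no config file needed) would be:

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version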
	I1212 20:31:50.883943   33042 ssh_runner.go:195] Run: crio --version
	I1212 20:31:50.936705   33042 command_runner.go:130] > crio version 1.24.1
	I1212 20:31:50.936732   33042 command_runner.go:130] > Version:          1.24.1
	I1212 20:31:50.936746   33042 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:31:50.936751   33042 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:31:50.936756   33042 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:31:50.936761   33042 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:31:50.936765   33042 command_runner.go:130] > Compiler:         gc
	I1212 20:31:50.936769   33042 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:31:50.936774   33042 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:31:50.936781   33042 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:31:50.936785   33042 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:31:50.936789   33042 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:31:50.938467   33042 ssh_runner.go:195] Run: crio --version
	I1212 20:31:50.985616   33042 command_runner.go:130] > crio version 1.24.1
	I1212 20:31:50.985642   33042 command_runner.go:130] > Version:          1.24.1
	I1212 20:31:50.985650   33042 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:31:50.985654   33042 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:31:50.985660   33042 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:31:50.985665   33042 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:31:50.985670   33042 command_runner.go:130] > Compiler:         gc
	I1212 20:31:50.985678   33042 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:31:50.985692   33042 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:31:50.985708   33042 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:31:50.985716   33042 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:31:50.985722   33042 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:31:50.988119   33042 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 20:31:50.989307   33042 out.go:177]   - env NO_PROXY=192.168.39.77
	I1212 20:31:50.990383   33042 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:31:50.992801   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:50.993107   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:31:50.993130   33042 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:31:50.993339   33042 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:31:50.997831   33042 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 20:31:50.998049   33042 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818 for IP: 192.168.39.65
	I1212 20:31:50.998079   33042 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:31:50.998220   33042 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:31:50.998284   33042 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:31:50.998300   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:31:50.998321   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:31:50.998339   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:31:50.998357   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:31:50.998424   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:31:50.998463   33042 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:31:50.998478   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:31:50.998513   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:31:50.998547   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:31:50.998580   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:31:50.998637   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:31:50.998676   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem -> /usr/share/ca-certificates/16456.pem
	I1212 20:31:50.998695   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /usr/share/ca-certificates/164562.pem
	I1212 20:31:50.998716   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:31:50.999059   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:31:51.023316   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:31:51.049591   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:31:51.075138   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:31:51.100508   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:31:51.124832   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:31:51.148446   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:31:51.172762   33042 ssh_runner.go:195] Run: openssl version
	I1212 20:31:51.178593   33042 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 20:31:51.178772   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:31:51.189154   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:31:51.193944   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:31:51.194061   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:31:51.194104   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:31:51.199627   33042 command_runner.go:130] > 51391683
	I1212 20:31:51.199706   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:31:51.208585   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:31:51.218630   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:31:51.223592   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:31:51.223613   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:31:51.223652   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:31:51.230464   33042 command_runner.go:130] > 3ec20f2e
	I1212 20:31:51.230530   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 20:31:51.239218   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:31:51.251068   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:31:51.255945   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:31:51.256198   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:31:51.256254   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:31:51.262425   33042 command_runner.go:130] > b5213941
	I1212 20:31:51.262702   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
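	The three blocks above all follow the same pattern for trusting a certificate system-wide: hash the certificate subject with openssl, then create a hash-named symlink under /etc/ssl/certs. A sketch of that pattern, using the 16456.pem certificate from this log:

		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem)   # 51391683 in this run
		sudo ln -fs /usr/share/ca-certificates/16456.pem "/etc/ssl/certs/${hash}.0"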
	I1212 20:31:51.272980   33042 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:31:51.277309   33042 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:31:51.277418   33042 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:31:51.277510   33042 ssh_runner.go:195] Run: crio config
	I1212 20:31:51.325390   33042 command_runner.go:130] ! time="2023-12-12 20:31:51.317576986Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 20:31:51.325452   33042 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:31:51.337708   33042 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:31:51.337739   33042 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:31:51.337759   33042 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:31:51.337766   33042 command_runner.go:130] > #
	I1212 20:31:51.337778   33042 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:31:51.337787   33042 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:31:51.337796   33042 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:31:51.337807   33042 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:31:51.337813   33042 command_runner.go:130] > # reload'.
	I1212 20:31:51.337825   33042 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:31:51.337839   33042 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:31:51.337849   33042 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:31:51.337863   33042 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:31:51.337873   33042 command_runner.go:130] > [crio]
	I1212 20:31:51.337883   33042 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:31:51.337893   33042 command_runner.go:130] > # containers images, in this directory.
	I1212 20:31:51.337902   33042 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 20:31:51.337916   33042 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:31:51.337926   33042 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 20:31:51.337934   33042 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:31:51.337945   33042 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:31:51.337953   33042 command_runner.go:130] > storage_driver = "overlay"
	I1212 20:31:51.337965   33042 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:31:51.337977   33042 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:31:51.337986   33042 command_runner.go:130] > storage_option = [
	I1212 20:31:51.337994   33042 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 20:31:51.338004   33042 command_runner.go:130] > ]
	I1212 20:31:51.338017   33042 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:31:51.338028   33042 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:31:51.338038   33042 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:31:51.338050   33042 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:31:51.338061   33042 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:31:51.338071   33042 command_runner.go:130] > # always happen on a node reboot
	I1212 20:31:51.338081   33042 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:31:51.338092   33042 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:31:51.338105   33042 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:31:51.338120   33042 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:31:51.338131   33042 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 20:31:51.338146   33042 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:31:51.338162   33042 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:31:51.338172   33042 command_runner.go:130] > # internal_wipe = true
	I1212 20:31:51.338184   33042 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:31:51.338197   33042 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:31:51.338206   33042 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:31:51.338211   33042 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:31:51.338220   33042 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:31:51.338226   33042 command_runner.go:130] > [crio.api]
	I1212 20:31:51.338232   33042 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:31:51.338239   33042 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:31:51.338245   33042 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:31:51.338251   33042 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:31:51.338258   33042 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:31:51.338265   33042 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:31:51.338272   33042 command_runner.go:130] > # stream_port = "0"
	I1212 20:31:51.338278   33042 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:31:51.338285   33042 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:31:51.338291   33042 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:31:51.338298   33042 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:31:51.338305   33042 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:31:51.338313   33042 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 20:31:51.338319   33042 command_runner.go:130] > # minutes.
	I1212 20:31:51.338323   33042 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:31:51.338332   33042 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:31:51.338340   33042 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 20:31:51.338346   33042 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:31:51.338352   33042 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:31:51.338361   33042 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:31:51.338369   33042 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 20:31:51.338373   33042 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:31:51.338383   33042 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:31:51.338389   33042 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 20:31:51.338396   33042 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:31:51.338403   33042 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 20:31:51.338425   33042 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:31:51.338437   33042 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:31:51.338447   33042 command_runner.go:130] > [crio.runtime]
	I1212 20:31:51.338460   33042 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:31:51.338472   33042 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:31:51.338481   33042 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:31:51.338494   33042 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:31:51.338503   33042 command_runner.go:130] > # default_ulimits = [
	I1212 20:31:51.338511   33042 command_runner.go:130] > # ]
	I1212 20:31:51.338521   33042 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:31:51.338528   33042 command_runner.go:130] > # no_pivot = false
	I1212 20:31:51.338535   33042 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:31:51.338545   33042 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:31:51.338550   33042 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:31:51.338559   33042 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:31:51.338567   33042 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:31:51.338576   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:31:51.338585   33042 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 20:31:51.338591   33042 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:31:51.338598   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:31:51.338611   33042 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:31:51.338620   33042 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:31:51.338627   33042 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:31:51.338641   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:31:51.338647   33042 command_runner.go:130] > conmon_env = [
	I1212 20:31:51.338654   33042 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 20:31:51.338660   33042 command_runner.go:130] > ]
	I1212 20:31:51.338665   33042 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:31:51.338673   33042 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:31:51.338681   33042 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:31:51.338687   33042 command_runner.go:130] > # default_env = [
	I1212 20:31:51.338691   33042 command_runner.go:130] > # ]
	I1212 20:31:51.338699   33042 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:31:51.338705   33042 command_runner.go:130] > # selinux = false
	I1212 20:31:51.338711   33042 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:31:51.338720   33042 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 20:31:51.338726   33042 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 20:31:51.338732   33042 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:31:51.338738   33042 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 20:31:51.338744   33042 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 20:31:51.338758   33042 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 20:31:51.338765   33042 command_runner.go:130] > # which might increase security.
	I1212 20:31:51.338770   33042 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 20:31:51.338778   33042 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:31:51.338786   33042 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:31:51.338795   33042 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:31:51.338801   33042 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:31:51.338808   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:31:51.338812   33042 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:31:51.338818   33042 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:31:51.338823   33042 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:31:51.338828   33042 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:31:51.338836   33042 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:31:51.338840   33042 command_runner.go:130] > # irqbalance daemon.
	I1212 20:31:51.338848   33042 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:31:51.338854   33042 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:31:51.338880   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:31:51.338887   33042 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:31:51.338892   33042 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:31:51.338896   33042 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:31:51.338905   33042 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:31:51.338909   33042 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:31:51.338917   33042 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:31:51.338925   33042 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:31:51.338934   33042 command_runner.go:130] > # will be added.
	I1212 20:31:51.338938   33042 command_runner.go:130] > # default_capabilities = [
	I1212 20:31:51.338944   33042 command_runner.go:130] > # 	"CHOWN",
	I1212 20:31:51.338949   33042 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:31:51.338956   33042 command_runner.go:130] > # 	"FSETID",
	I1212 20:31:51.338960   33042 command_runner.go:130] > # 	"FOWNER",
	I1212 20:31:51.338966   33042 command_runner.go:130] > # 	"SETGID",
	I1212 20:31:51.338970   33042 command_runner.go:130] > # 	"SETUID",
	I1212 20:31:51.338976   33042 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:31:51.338980   33042 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:31:51.338986   33042 command_runner.go:130] > # 	"KILL",
	I1212 20:31:51.338992   33042 command_runner.go:130] > # ]
	I1212 20:31:51.338998   33042 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:31:51.339006   33042 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:31:51.339014   33042 command_runner.go:130] > # default_sysctls = [
	I1212 20:31:51.339017   33042 command_runner.go:130] > # ]
	I1212 20:31:51.339025   33042 command_runner.go:130] > # List of devices on the host that a
	I1212 20:31:51.339034   33042 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:31:51.339040   33042 command_runner.go:130] > # allowed_devices = [
	I1212 20:31:51.339044   33042 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:31:51.339050   33042 command_runner.go:130] > # ]
	I1212 20:31:51.339055   33042 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:31:51.339065   33042 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:31:51.339070   33042 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:31:51.339121   33042 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:31:51.339133   33042 command_runner.go:130] > # additional_devices = [
	I1212 20:31:51.339137   33042 command_runner.go:130] > # ]
	I1212 20:31:51.339142   33042 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:31:51.339147   33042 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:31:51.339153   33042 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:31:51.339158   33042 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:31:51.339164   33042 command_runner.go:130] > # ]
	I1212 20:31:51.339170   33042 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:31:51.339179   33042 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:31:51.339186   33042 command_runner.go:130] > # Defaults to false.
	I1212 20:31:51.339191   33042 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:31:51.339199   33042 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:31:51.339207   33042 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:31:51.339214   33042 command_runner.go:130] > # hooks_dir = [
	I1212 20:31:51.339218   33042 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:31:51.339224   33042 command_runner.go:130] > # ]
	I1212 20:31:51.339230   33042 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:31:51.339251   33042 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:31:51.339264   33042 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:31:51.339270   33042 command_runner.go:130] > #
	I1212 20:31:51.339283   33042 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:31:51.339292   33042 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:31:51.339300   33042 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:31:51.339306   33042 command_runner.go:130] > #
	I1212 20:31:51.339312   33042 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:31:51.339321   33042 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:31:51.339330   33042 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:31:51.339337   33042 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:31:51.339340   33042 command_runner.go:130] > #
	I1212 20:31:51.339345   33042 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:31:51.339354   33042 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:31:51.339362   33042 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:31:51.339368   33042 command_runner.go:130] > pids_limit = 1024
	I1212 20:31:51.339374   33042 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 20:31:51.339382   33042 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:31:51.339391   33042 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:31:51.339399   33042 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:31:51.339406   33042 command_runner.go:130] > # log_size_max = -1
	I1212 20:31:51.339417   33042 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1212 20:31:51.339431   33042 command_runner.go:130] > # log_to_journald = false
	I1212 20:31:51.339444   33042 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:31:51.339454   33042 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:31:51.339465   33042 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:31:51.339476   33042 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:31:51.339488   33042 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:31:51.339497   33042 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:31:51.339510   33042 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:31:51.339517   33042 command_runner.go:130] > # read_only = false
	I1212 20:31:51.339526   33042 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:31:51.339533   33042 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:31:51.339540   33042 command_runner.go:130] > # live configuration reload.
	I1212 20:31:51.339544   33042 command_runner.go:130] > # log_level = "info"
	I1212 20:31:51.339552   33042 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:31:51.339560   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:31:51.339565   33042 command_runner.go:130] > # log_filter = ""
	I1212 20:31:51.339574   33042 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:31:51.339581   33042 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:31:51.339589   33042 command_runner.go:130] > # separated by comma.
	I1212 20:31:51.339596   33042 command_runner.go:130] > # uid_mappings = ""
	I1212 20:31:51.339602   33042 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:31:51.339611   33042 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:31:51.339617   33042 command_runner.go:130] > # separated by comma.
	I1212 20:31:51.339622   33042 command_runner.go:130] > # gid_mappings = ""
	I1212 20:31:51.339630   33042 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:31:51.339638   33042 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:31:51.339646   33042 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:31:51.339651   33042 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:31:51.339659   33042 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:31:51.339668   33042 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:31:51.339676   33042 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:31:51.339683   33042 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:31:51.339688   33042 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:31:51.339696   33042 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:31:51.339704   33042 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:31:51.339711   33042 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:31:51.339719   33042 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:31:51.339728   33042 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:31:51.339733   33042 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:31:51.339740   33042 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:31:51.339752   33042 command_runner.go:130] > drop_infra_ctr = false
	I1212 20:31:51.339760   33042 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:31:51.339767   33042 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:31:51.339777   33042 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:31:51.339783   33042 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:31:51.339791   33042 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:31:51.339798   33042 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:31:51.339803   33042 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:31:51.339812   33042 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:31:51.339816   33042 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 20:31:51.339822   33042 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:31:51.339831   33042 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 20:31:51.339837   33042 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 20:31:51.339844   33042 command_runner.go:130] > # default_runtime = "runc"
	I1212 20:31:51.339852   33042 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:31:51.339861   33042 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 20:31:51.339870   33042 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:31:51.339878   33042 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:31:51.339886   33042 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:31:51.339893   33042 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:31:51.339898   33042 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:31:51.339904   33042 command_runner.go:130] > # ]
	I1212 20:31:51.339910   33042 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:31:51.339919   33042 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:31:51.339928   33042 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 20:31:51.339934   33042 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 20:31:51.339940   33042 command_runner.go:130] > #
	I1212 20:31:51.339945   33042 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 20:31:51.339952   33042 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 20:31:51.339956   33042 command_runner.go:130] > #  runtime_type = "oci"
	I1212 20:31:51.339963   33042 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 20:31:51.339968   33042 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 20:31:51.339976   33042 command_runner.go:130] > #  allowed_annotations = []
	I1212 20:31:51.339981   33042 command_runner.go:130] > # Where:
	I1212 20:31:51.339987   33042 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 20:31:51.339995   33042 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 20:31:51.340004   33042 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:31:51.340012   33042 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:31:51.340018   33042 command_runner.go:130] > #   in $PATH.
	I1212 20:31:51.340024   33042 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 20:31:51.340031   33042 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:31:51.340038   33042 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 20:31:51.340044   33042 command_runner.go:130] > #   state.
	I1212 20:31:51.340050   33042 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:31:51.340058   33042 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 20:31:51.340064   33042 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:31:51.340072   33042 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:31:51.340080   33042 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:31:51.340089   33042 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:31:51.340097   33042 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:31:51.340105   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:31:51.340115   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:31:51.340123   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:31:51.340131   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:31:51.340139   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:31:51.340148   33042 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:31:51.340154   33042 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:31:51.340163   33042 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 20:31:51.340170   33042 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:31:51.340174   33042 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:31:51.340181   33042 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 20:31:51.340185   33042 command_runner.go:130] > runtime_type = "oci"
	I1212 20:31:51.340196   33042 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:31:51.340205   33042 command_runner.go:130] > runtime_config_path = ""
	I1212 20:31:51.340215   33042 command_runner.go:130] > monitor_path = ""
	I1212 20:31:51.340225   33042 command_runner.go:130] > monitor_cgroup = ""
	I1212 20:31:51.340234   33042 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:31:51.340244   33042 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 20:31:51.340259   33042 command_runner.go:130] > # running containers
	I1212 20:31:51.340269   33042 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 20:31:51.340281   33042 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 20:31:51.340316   33042 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 20:31:51.340325   33042 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 20:31:51.340331   33042 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 20:31:51.340338   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 20:31:51.340342   33042 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 20:31:51.340349   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 20:31:51.340354   33042 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 20:31:51.340361   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 20:31:51.340367   33042 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:31:51.340375   33042 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:31:51.340384   33042 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:31:51.340393   33042 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:31:51.340403   33042 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 20:31:51.340414   33042 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:31:51.340432   33042 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:31:51.340450   33042 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:31:51.340462   33042 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:31:51.340476   33042 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:31:51.340485   33042 command_runner.go:130] > # Example:
	I1212 20:31:51.340495   33042 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:31:51.340503   33042 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:31:51.340514   33042 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:31:51.340526   33042 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:31:51.340535   33042 command_runner.go:130] > # cpuset = 0
	I1212 20:31:51.340544   33042 command_runner.go:130] > # cpushares = "0-1"
	I1212 20:31:51.340553   33042 command_runner.go:130] > # Where:
	I1212 20:31:51.340561   33042 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:31:51.340573   33042 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:31:51.340581   33042 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:31:51.340586   33042 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:31:51.340596   33042 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:31:51.340605   33042 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:31:51.340611   33042 command_runner.go:130] > # 
	I1212 20:31:51.340623   33042 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:31:51.340631   33042 command_runner.go:130] > #
	I1212 20:31:51.340644   33042 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:31:51.340656   33042 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 20:31:51.340670   33042 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 20:31:51.340683   33042 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 20:31:51.340695   33042 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 20:31:51.340705   33042 command_runner.go:130] > [crio.image]
	I1212 20:31:51.340716   33042 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:31:51.340727   33042 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:31:51.340740   33042 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:31:51.340754   33042 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:31:51.340761   33042 command_runner.go:130] > # global_auth_file = ""
	I1212 20:31:51.340767   33042 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:31:51.340774   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:31:51.340779   33042 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 20:31:51.340788   33042 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:31:51.340795   33042 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:31:51.340801   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:31:51.340806   33042 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:31:51.340814   33042 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:31:51.340820   33042 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 20:31:51.340828   33042 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 20:31:51.340835   33042 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:31:51.340842   33042 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:31:51.340848   33042 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:31:51.340857   33042 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:31:51.340863   33042 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:31:51.340871   33042 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:31:51.340877   33042 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:31:51.340884   33042 command_runner.go:130] > # signature_policy = ""
	I1212 20:31:51.340889   33042 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:31:51.340898   33042 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:31:51.340903   33042 command_runner.go:130] > # changing them here.
	I1212 20:31:51.340910   33042 command_runner.go:130] > # insecure_registries = [
	I1212 20:31:51.340913   33042 command_runner.go:130] > # ]
	I1212 20:31:51.340923   33042 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:31:51.340931   33042 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:31:51.340935   33042 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:31:51.340943   33042 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:31:51.340947   33042 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:31:51.340955   33042 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:31:51.340961   33042 command_runner.go:130] > # CNI plugins.
	I1212 20:31:51.340965   33042 command_runner.go:130] > [crio.network]
	I1212 20:31:51.340974   33042 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:31:51.340982   33042 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:31:51.340986   33042 command_runner.go:130] > # cni_default_network = ""
	I1212 20:31:51.340994   33042 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:31:51.341001   33042 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:31:51.341008   33042 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:31:51.341017   33042 command_runner.go:130] > # plugin_dirs = [
	I1212 20:31:51.341027   33042 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:31:51.341035   33042 command_runner.go:130] > # ]
	I1212 20:31:51.341047   33042 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:31:51.341061   33042 command_runner.go:130] > [crio.metrics]
	I1212 20:31:51.341072   33042 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:31:51.341082   33042 command_runner.go:130] > enable_metrics = true
	I1212 20:31:51.341092   33042 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:31:51.341100   33042 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:31:51.341112   33042 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:31:51.341121   33042 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:31:51.341129   33042 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:31:51.341136   33042 command_runner.go:130] > # metrics_collectors = [
	I1212 20:31:51.341140   33042 command_runner.go:130] > # 	"operations",
	I1212 20:31:51.341147   33042 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 20:31:51.341151   33042 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 20:31:51.341156   33042 command_runner.go:130] > # 	"operations_errors",
	I1212 20:31:51.341161   33042 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 20:31:51.341167   33042 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 20:31:51.341172   33042 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 20:31:51.341179   33042 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 20:31:51.341183   33042 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 20:31:51.341190   33042 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:31:51.341197   33042 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 20:31:51.341202   33042 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:31:51.341211   33042 command_runner.go:130] > # 	"containers_oom",
	I1212 20:31:51.341215   33042 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:31:51.341221   33042 command_runner.go:130] > # 	"operations_total",
	I1212 20:31:51.341225   33042 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:31:51.341232   33042 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:31:51.341236   33042 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:31:51.341241   33042 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:31:51.341246   33042 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:31:51.341253   33042 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:31:51.341257   33042 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:31:51.341264   33042 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:31:51.341269   33042 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:31:51.341275   33042 command_runner.go:130] > # ]
	I1212 20:31:51.341280   33042 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:31:51.341286   33042 command_runner.go:130] > # metrics_port = 9090
	I1212 20:31:51.341292   33042 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:31:51.341298   33042 command_runner.go:130] > # metrics_socket = ""
	I1212 20:31:51.341304   33042 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:31:51.341312   33042 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:31:51.341322   33042 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:31:51.341332   33042 command_runner.go:130] > # certificate on any modification event.
	I1212 20:31:51.341343   33042 command_runner.go:130] > # metrics_cert = ""
	I1212 20:31:51.341354   33042 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:31:51.341365   33042 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:31:51.341374   33042 command_runner.go:130] > # metrics_key = ""
	I1212 20:31:51.341386   33042 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:31:51.341395   33042 command_runner.go:130] > [crio.tracing]
	I1212 20:31:51.341407   33042 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:31:51.341416   33042 command_runner.go:130] > # enable_tracing = false
	I1212 20:31:51.341428   33042 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:31:51.341438   33042 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 20:31:51.341450   33042 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 20:31:51.341461   33042 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:31:51.341476   33042 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:31:51.341485   33042 command_runner.go:130] > [crio.stats]
	I1212 20:31:51.341494   33042 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:31:51.341501   33042 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:31:51.341508   33042 command_runner.go:130] > # stats_collection_period = 0
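The dump above is the effective CRI-O configuration on the node. If individual values needed to be overridden for an experiment, a drop-in file is usually the least invasive route; the sketch below assumes CRI-O merges files from /etc/crio/crio.conf.d/, and the file name 99-overrides.conf and the chosen values are illustrative only:

  # hypothetical override, then restart the runtime
  sudo tee /etc/crio/crio.conf.d/99-overrides.conf <<'EOF'
  [crio.runtime]
  log_level = "debug"
  pids_limit = 2048
  EOF
  sudo systemctl restart crio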
	I1212 20:31:51.341568   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:31:51.341577   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:31:51.341586   33042 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:31:51.341603   33042 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-562818 NodeName:multinode-562818-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:31:51.341704   33042 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-562818-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
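The rendered kubeadm/kubelet/kube-proxy configuration above is what the node is provisioned with; once the node has joined, the cluster-side copy can be inspected directly (the join output further below points at the same ConfigMap). A couple of read-only commands, assuming a working kubeconfig for this cluster:

  kubectl -n kube-system get cm kubeadm-config -o yaml
  kubectl -n kube-system get cm kubelet-config -o yaml   # kubelet defaults, where this ConfigMap exists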
	
	I1212 20:31:51.341756   33042 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-562818-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
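The ExecStart override above is written as a systemd drop-in (the scp a few lines below targets /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). After changing such a drop-in by hand, the usual sequence to apply and verify it would be something like:

  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  systemctl cat kubelet   # prints the unit plus every drop-in currently in effect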
	I1212 20:31:51.341804   33042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 20:31:51.351104   33042 command_runner.go:130] > kubeadm
	I1212 20:31:51.351122   33042 command_runner.go:130] > kubectl
	I1212 20:31:51.351126   33042 command_runner.go:130] > kubelet
	I1212 20:31:51.351299   33042 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 20:31:51.351371   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 20:31:51.360290   33042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 20:31:51.377448   33042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:31:51.394656   33042 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I1212 20:31:51.398674   33042 command_runner.go:130] > 192.168.39.77	control-plane.minikube.internal
	I1212 20:31:51.398755   33042 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:31:51.399014   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:31:51.399148   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:31:51.399183   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:31:51.413429   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1212 20:31:51.413854   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:31:51.414264   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:31:51.414286   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:31:51.414629   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:31:51.414811   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:31:51.414956   33042 start.go:304] JoinCluster: &{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:31:51.415095   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 20:31:51.415116   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:31:51.417710   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:31:51.418141   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:31:51.418184   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:31:51.418289   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:31:51.418448   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:31:51.418592   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:31:51.418742   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:31:51.616649   33042 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rjwcgh.xcm5607lkt0wqppw --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
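The join command above is minted on the control plane by the `kubeadm token create --print-join-command --ttl=0` invocation shown a few lines earlier. If only the CA certificate were at hand, the discovery hash could be recomputed with the standard kubeadm recipe (a sketch; paths assume a default kubeadm layout):

  # on the control-plane node
  sudo kubeadm token create --print-join-command --ttl=0
  # recompute the value passed to --discovery-token-ca-cert-hash (prefix the output with "sha256:")
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'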
	I1212 20:31:51.616871   33042 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 20:31:51.616907   33042 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:31:51.617213   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:31:51.617250   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:31:51.631914   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1212 20:31:51.632327   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:31:51.632787   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:31:51.632818   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:31:51.633157   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:31:51.633335   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:31:51.633507   33042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-562818-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1212 20:31:51.633533   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:31:51.636329   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:31:51.636734   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:31:51.636764   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:31:51.636835   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:31:51.637009   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:31:51.637175   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:31:51.637310   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:31:51.862357   33042 command_runner.go:130] > node/multinode-562818-m02 cordoned
	I1212 20:31:54.909760   33042 command_runner.go:130] > pod "busybox-5bc68d56bd-vbpn5" has DeletionTimestamp older than 1 seconds, skipping
	I1212 20:31:54.909791   33042 command_runner.go:130] > node/multinode-562818-m02 drained
	I1212 20:31:54.911346   33042 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1212 20:31:54.911374   33042 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cmz7d, kube-system/kube-proxy-sxw8h
	I1212 20:31:54.911401   33042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-562818-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.277866905s)
	I1212 20:31:54.911417   33042 node.go:108] successfully drained node "m02"
	I1212 20:31:54.911813   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:31:54.912069   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:31:54.912428   33042 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1212 20:31:54.912481   33042 round_trippers.go:463] DELETE https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:31:54.912492   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:54.912502   33042 round_trippers.go:473]     Content-Type: application/json
	I1212 20:31:54.912515   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:54.912527   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:54.932388   33042 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1212 20:31:54.932414   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:54.932424   33042 round_trippers.go:580]     Audit-Id: 111dc2a5-ba5d-45af-bf2e-6941a588b474
	I1212 20:31:54.932431   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:54.932439   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:54.932446   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:54.932455   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:54.932464   33042 round_trippers.go:580]     Content-Length: 171
	I1212 20:31:54.932472   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:54 GMT
	I1212 20:31:54.932502   33042 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-562818-m02","kind":"nodes","uid":"fb1a62c9-0937-4b46-bc61-1969547d5fc0"}}
	I1212 20:31:54.932539   33042 node.go:124] successfully deleted node "m02"
	I1212 20:31:54.932551   33042 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 20:31:54.932574   33042 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 20:31:54.932597   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rjwcgh.xcm5607lkt0wqppw --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-562818-m02"
	I1212 20:31:55.004601   33042 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 20:31:55.149188   33042 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 20:31:55.149224   33042 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 20:31:55.212346   33042 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:31:55.212463   33042 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:31:55.212601   33042 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 20:31:55.364477   33042 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 20:31:55.888492   33042 command_runner.go:130] > This node has joined the cluster:
	I1212 20:31:55.888531   33042 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 20:31:55.888542   33042 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 20:31:55.888552   33042 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 20:31:55.891721   33042 command_runner.go:130] ! W1212 20:31:54.996483    2808 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 20:31:55.891754   33042 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1212 20:31:55.891767   33042 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1212 20:31:55.891781   33042 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1212 20:31:55.891816   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 20:31:56.160043   33042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=multinode-562818 minikube.k8s.io/updated_at=2023_12_12T20_31_56_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:31:56.283570   33042 command_runner.go:130] > node/multinode-562818-m02 labeled
	I1212 20:31:56.296711   33042 command_runner.go:130] > node/multinode-562818-m03 labeled
	I1212 20:31:56.299680   33042 start.go:306] JoinCluster complete in 4.884722008s
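Taken together, the sequence above is minikube's stale-worker recovery: drain the old node object, delete it from the API server, then run kubeadm join with a fresh token. An equivalent manual sequence, with node and endpoint names copied from this log and the token/hash left as placeholders, would look roughly like:

  kubectl drain multinode-562818-m02 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
  kubectl delete node multinode-562818-m02
  # on the worker
  sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-562818-m02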
	I1212 20:31:56.299703   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:31:56.299710   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:31:56.299762   33042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:31:56.305948   33042 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 20:31:56.305973   33042 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 20:31:56.305986   33042 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 20:31:56.305999   33042 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:31:56.306013   33042 command_runner.go:130] > Access: 2023-12-12 20:29:28.351313795 +0000
	I1212 20:31:56.306024   33042 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 20:31:56.306034   33042 command_runner.go:130] > Change: 2023-12-12 20:29:26.512313795 +0000
	I1212 20:31:56.306043   33042 command_runner.go:130] >  Birth: -
	I1212 20:31:56.306089   33042 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 20:31:56.306102   33042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 20:31:56.326406   33042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:31:56.685037   33042 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:31:56.690615   33042 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:31:56.693617   33042 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 20:31:56.704578   33042 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 20:31:56.707618   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:31:56.707831   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:31:56.708087   33042 round_trippers.go:463] GET https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:31:56.708100   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.708107   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.708113   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.710632   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.710653   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.710663   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.710671   33042 round_trippers.go:580]     Audit-Id: 2c173dc8-41c1-4330-a099-b7971410dcf0
	I1212 20:31:56.710679   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.710686   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.710694   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.710704   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.710716   33042 round_trippers.go:580]     Content-Length: 291
	I1212 20:31:56.710739   33042 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"849","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 20:31:56.710833   33042 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-562818" context rescaled to 1 replicas
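The rescale above goes through the deployment's scale subresource; the same check and change can be made from the CLI, for example:

  kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}{"\n"}'
  kubectl -n kube-system scale deployment coredns --replicas=1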
	I1212 20:31:56.710859   33042 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 20:31:56.712915   33042 out.go:177] * Verifying Kubernetes components...
	I1212 20:31:56.714382   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:31:56.727936   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:31:56.728179   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:31:56.728435   33042 node_ready.go:35] waiting up to 6m0s for node "multinode-562818-m02" to be "Ready" ...
	I1212 20:31:56.728499   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:31:56.728507   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.728514   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.728521   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.731409   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.731435   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.731445   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.731453   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.731462   33042 round_trippers.go:580]     Audit-Id: 0e6947fe-a794-4026-b395-2827e6e0624c
	I1212 20:31:56.731476   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.731488   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.731501   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.731954   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"376c7e88-3106-4db4-9914-b7b057a0ebe7","resourceVersion":"1020","creationTimestamp":"2023-12-12T20:31:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_31_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:31:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 20:31:56.732226   33042 node_ready.go:49] node "multinode-562818-m02" has status "Ready":"True"
	I1212 20:31:56.732245   33042 node_ready.go:38] duration metric: took 3.791956ms waiting for node "multinode-562818-m02" to be "Ready" ...
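(Editor's note) The node_ready wait above simply GETs the node object and inspects its Ready condition until it turns True. A minimal client-go sketch of the same idea — an illustrative helper, not minikube's actual node_ready.go code; the kubeconfig path is an assumption for the example:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has a Ready condition set to
    // True, mirroring what the log checks after each GET /api/v1/nodes/<name>.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// The kubeconfig path is hypothetical; minikube builds its rest.Config in code.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := nodeIsReady(context.Background(), cs, "multinode-562818-m02")
    	fmt.Println(ready, err)
    }

In the run above the node reported Ready on the first poll, so the wait completed in under 4ms.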
	I1212 20:31:56.732254   33042 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:31:56.732302   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:31:56.732312   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.732319   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.732325   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.736116   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:31:56.736136   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.736143   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.736148   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.736153   33042 round_trippers.go:580]     Audit-Id: 88f3205b-201a-426a-8c30-e854ea073060
	I1212 20:31:56.736158   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.736163   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.736168   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.737450   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1025"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82041 chars]
	I1212 20:31:56.739804   33042 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.739869   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:31:56.739881   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.739891   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.739903   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.742237   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.742256   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.742265   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.742273   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.742281   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.742291   33042 round_trippers.go:580]     Audit-Id: 7c84cca8-9f28-4e04-bd1b-097c928496db
	I1212 20:31:56.742306   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.742314   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.742560   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 20:31:56.742974   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:56.742990   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.743002   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.743011   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.745290   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.745308   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.745318   33042 round_trippers.go:580]     Audit-Id: baba6053-2e28-417b-88a0-c49f4a2eb6c2
	I1212 20:31:56.745326   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.745332   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.745338   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.745347   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.745352   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.745699   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:31:56.746010   33042 pod_ready.go:92] pod "coredns-5dd5756b68-689lp" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:56.746032   33042 pod_ready.go:81] duration metric: took 6.204755ms waiting for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.746044   33042 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.746093   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-562818
	I1212 20:31:56.746100   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.746107   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.746113   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.748629   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.748649   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.748659   33042 round_trippers.go:580]     Audit-Id: 724b9247-5d3f-4d4c-8499-fe7ba4a362e6
	I1212 20:31:56.748667   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.748675   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.748683   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.748691   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.748703   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.748809   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"831","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 20:31:56.749221   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:56.749235   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.749242   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.749248   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.751352   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.751367   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.751373   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.751379   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.751386   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.751398   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.751410   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.751422   33042 round_trippers.go:580]     Audit-Id: 83c03d61-8b9c-41d0-b8d4-2a86a46b647b
	I1212 20:31:56.751625   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:31:56.751971   33042 pod_ready.go:92] pod "etcd-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:56.751986   33042 pod_ready.go:81] duration metric: took 5.92955ms waiting for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.752001   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.752045   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:31:56.752052   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.752059   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.752067   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.753997   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:31:56.754013   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.754020   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.754035   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.754047   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.754056   33042 round_trippers.go:580]     Audit-Id: 5d06ecd7-1cd4-41e1-8b7f-d168bd0e5bce
	I1212 20:31:56.754068   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.754081   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.754258   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"857","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 20:31:56.754638   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:56.754652   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.754659   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.754666   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.756717   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.756732   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.756738   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.756743   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.756757   33042 round_trippers.go:580]     Audit-Id: 33b3bf14-1c08-4414-b78c-9e15c4a4a646
	I1212 20:31:56.756771   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.756778   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.756787   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.756985   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:31:56.757252   33042 pod_ready.go:92] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:56.757266   33042 pod_ready.go:81] duration metric: took 5.255863ms waiting for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.757274   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.757316   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-562818
	I1212 20:31:56.757324   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.757331   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.757337   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.760300   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.760319   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.760328   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.760336   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.760344   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.760352   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.760359   33042 round_trippers.go:580]     Audit-Id: 4b1dc4e8-ea81-4c9a-803a-facdfbde869c
	I1212 20:31:56.760367   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.760904   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-562818","namespace":"kube-system","uid":"23b73a4b-e188-4b7c-a13d-1fd61862a4e1","resourceVersion":"846","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.mirror":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.seen":"2023-12-12T20:19:35.712598374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 20:31:56.761350   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:56.761370   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.761379   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.761386   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.766477   33042 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 20:31:56.766504   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.766511   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.766517   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.766522   33042 round_trippers.go:580]     Audit-Id: d67622e7-a6a1-4fa9-8cbe-948a78d1245d
	I1212 20:31:56.766528   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.766533   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.766538   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.766688   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:31:56.766979   33042 pod_ready.go:92] pod "kube-controller-manager-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:56.766992   33042 pod_ready.go:81] duration metric: took 9.712304ms waiting for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.767002   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:56.929281   33042 request.go:629] Waited for 162.223018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:31:56.929346   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:31:56.929354   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:56.929364   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:56.929374   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:56.932208   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:56.932232   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:56.932242   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:56.932250   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:56.932258   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:56.932267   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:56 GMT
	I1212 20:31:56.932275   33042 round_trippers.go:580]     Audit-Id: a0b9319c-2df0-41cd-b1c5-6ab07db737b5
	I1212 20:31:56.932283   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:56.932484   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4rrmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"2bcd718f-0c7c-461a-895e-44a0c1d566fd","resourceVersion":"816","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
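(Editor's note) The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's client-side rate limiter: the rest.Config dump earlier in the log shows QPS:0, Burst:0, which means the library falls back to its defaults (roughly 5 requests/s with a burst of 10) and queues extra requests locally. A hedged sketch of how a client could raise those limits — illustrative only, not how minikube's kapi.go configures its client; the kubeconfig path is assumed:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for the example.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// With QPS/Burst left at zero, client-go uses its built-in defaults and
    	// delays requests client-side, which is what produces the "Waited for ..."
    	// messages in the log above.
    	cfg.QPS = 50    // sustained requests per second before throttling kicks in
    	cfg.Burst = 100 // short bursts allowed above the sustained rate
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }

The ~160-200ms waits logged here are harmless; they just pace the burst of readiness polls against the apiserver.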
	I1212 20:31:57.129272   33042 request.go:629] Waited for 196.363083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:57.129365   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:57.129373   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:57.129385   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:57.129395   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:57.131847   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:57.131863   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:57.131870   33042 round_trippers.go:580]     Audit-Id: f0e1f603-22c0-43ba-ba6b-bc5c1e0636e9
	I1212 20:31:57.131875   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:57.131880   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:57.131889   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:57.131894   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:57.131899   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:57 GMT
	I1212 20:31:57.132081   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:31:57.132393   33042 pod_ready.go:92] pod "kube-proxy-4rrmn" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:57.132408   33042 pod_ready.go:81] duration metric: took 365.400676ms waiting for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:57.132416   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:57.328925   33042 request.go:629] Waited for 196.4336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:31:57.329003   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:31:57.329015   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:57.329026   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:57.329036   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:57.332574   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:31:57.332603   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:57.332614   33042 round_trippers.go:580]     Audit-Id: 45acd402-2b44-4ef7-aa62-7309c19ee5db
	I1212 20:31:57.332622   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:57.332630   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:57.332638   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:57.332647   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:57.332656   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:57 GMT
	I1212 20:31:57.332849   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sxw8h","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f281e87-2597-4bd0-8ca4-cd7556c0a8e4","resourceVersion":"992","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1212 20:31:57.528680   33042 request.go:629] Waited for 195.300089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:31:57.528744   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:31:57.528751   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:57.528763   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:57.528772   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:57.531811   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:31:57.531834   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:57.531840   33042 round_trippers.go:580]     Audit-Id: 911f2e5c-2206-48fb-839f-86b1dbd72028
	I1212 20:31:57.531847   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:57.531852   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:57.531857   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:57.531862   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:57.531867   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:57 GMT
	I1212 20:31:57.532010   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"376c7e88-3106-4db4-9914-b7b057a0ebe7","resourceVersion":"1020","creationTimestamp":"2023-12-12T20:31:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_31_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:31:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 20:31:57.532303   33042 pod_ready.go:92] pod "kube-proxy-sxw8h" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:57.532320   33042 pod_ready.go:81] duration metric: took 399.892994ms waiting for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:57.532329   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:57.728813   33042 request.go:629] Waited for 196.429349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:31:57.728884   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:31:57.728895   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:57.728904   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:57.728912   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:57.731692   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:57.731712   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:57.731719   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:57.731729   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:57.731734   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:57.731739   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:57.731744   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:57 GMT
	I1212 20:31:57.731750   33042 round_trippers.go:580]     Audit-Id: 5304f267-34ce-4ef4-b1b9-a86e1ef4739d
	I1212 20:31:57.732036   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xch7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"c47d9b9f-ae3c-4404-a33a-d689c4b3e034","resourceVersion":"686","creationTimestamp":"2023-12-12T20:21:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:21:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 20:31:57.928758   33042 request.go:629] Waited for 196.303025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:31:57.928820   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:31:57.928825   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:57.928832   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:57.928838   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:57.931499   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:31:57.931525   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:57.931536   33042 round_trippers.go:580]     Audit-Id: dfa4b2fe-992c-4605-863c-ffc4a8f30c93
	I1212 20:31:57.931544   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:57.931553   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:57.931560   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:57.931569   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:57.931579   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:57 GMT
	I1212 20:31:57.931760   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m03","uid":"86ea80af-5628-4573-839f-f5590d741ec8","resourceVersion":"1021","creationTimestamp":"2023-12-12T20:22:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_31_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:22:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I1212 20:31:57.932099   33042 pod_ready.go:92] pod "kube-proxy-xch7v" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:57.932120   33042 pod_ready.go:81] duration metric: took 399.783717ms waiting for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:57.932134   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:58.129586   33042 request.go:629] Waited for 197.341816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:31:58.129639   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:31:58.129644   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:58.129652   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:58.129661   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:58.134652   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:31:58.134673   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:58.134679   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:58.134685   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:58.134690   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:58.134694   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:58 GMT
	I1212 20:31:58.134700   33042 round_trippers.go:580]     Audit-Id: 338690a6-2758-480d-91a3-8b62aa919c6f
	I1212 20:31:58.134704   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:58.134854   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"859","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 20:31:58.329545   33042 request.go:629] Waited for 194.363066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:58.329621   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:31:58.329631   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:58.329643   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:58.329654   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:58.332800   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:31:58.332816   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:58.332823   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:58.332828   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:58.332833   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:58 GMT
	I1212 20:31:58.332838   33042 round_trippers.go:580]     Audit-Id: 4fd67c39-f5b0-42eb-87bd-b5dee54e6494
	I1212 20:31:58.332843   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:58.332848   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:58.333018   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:31:58.333422   33042 pod_ready.go:92] pod "kube-scheduler-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:31:58.333448   33042 pod_ready.go:81] duration metric: took 401.305169ms waiting for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:31:58.333462   33042 pod_ready.go:38] duration metric: took 1.601199483s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
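(Editor's note) The pod_ready phase summarized above lists kube-system pods once, then waits on each pod matching the control-plane label set from the log line (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A compact client-go sketch of one such check — helper names are hypothetical, not minikube's pod_ready.go:

    package example

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podIsReady reports whether a pod's Ready condition is True, the same test
    // the log applies to coredns, etcd, kube-apiserver, and the other
    // system-critical pods.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // readySystemPods lists kube-system pods for one label selector (for example
    // "component=etcd" or "k8s-app=kube-proxy") and reports which are Ready.
    func readySystemPods(ctx context.Context, cs kubernetes.Interface, selector string) (map[string]bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return nil, err
    	}
    	out := map[string]bool{}
    	for i := range pods.Items {
    		out[pods.Items[i].Name] = podIsReady(&pods.Items[i])
    	}
    	return out, nil
    }

All of the watched pods were already Ready after the node restart, so the whole phase took about 1.6s.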
	I1212 20:31:58.333480   33042 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:31:58.333535   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:31:58.345987   33042 system_svc.go:56] duration metric: took 12.501539ms WaitForService to wait for kubelet.
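(Editor's note) The system_svc step above runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats a zero exit code as "kubelet is running". A minimal local sketch of the same check — the SSH transport (ssh_runner) is omitted and the unit name is simplified to "kubelet":

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive mirrors the check above: `systemctl is-active --quiet <unit>`
    // exits 0 when the unit is active and non-zero otherwise, so Run() returning
    // nil means the service is up.
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }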
	I1212 20:31:58.346011   33042 kubeadm.go:581] duration metric: took 1.635123224s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:31:58.346030   33042 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:31:58.529467   33042 request.go:629] Waited for 183.368758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I1212 20:31:58.529532   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I1212 20:31:58.529537   33042 round_trippers.go:469] Request Headers:
	I1212 20:31:58.529544   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:31:58.529551   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:31:58.532751   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:31:58.532776   33042 round_trippers.go:577] Response Headers:
	I1212 20:31:58.532786   33042 round_trippers.go:580]     Audit-Id: a61546d5-e69e-445b-8730-7544479e4090
	I1212 20:31:58.532793   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:31:58.532801   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:31:58.532810   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:31:58.532817   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:31:58.532824   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:31:58 GMT
	I1212 20:31:58.533029   33042 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1033"},"items":[{"metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16209 chars]
	I1212 20:31:58.533870   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:31:58.533900   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:31:58.533913   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:31:58.533921   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:31:58.533930   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:31:58.533937   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:31:58.533951   33042 node_conditions.go:105] duration metric: took 187.914627ms to run NodePressure ...
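(Editor's note) The NodePressure pass above pulls each node's capacity out of a single GET /api/v1/nodes response: 17784752Ki of ephemeral storage and 2 CPUs per node in this run. A short sketch of reading those figures from a NodeList with client-go — illustrative, assuming an existing clientset:

    package example

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printCapacities lists all nodes and prints the two capacity figures the
    // node_conditions step logs above: ephemeral storage and CPU count.
    func printCapacities(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }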
	I1212 20:31:58.533963   33042 start.go:228] waiting for startup goroutines ...
	I1212 20:31:58.533991   33042 start.go:242] writing updated cluster config ...
	I1212 20:31:58.534615   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:31:58.534739   33042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:31:58.536901   33042 out.go:177] * Starting worker node multinode-562818-m03 in cluster multinode-562818
	I1212 20:31:58.538397   33042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 20:31:58.538438   33042 cache.go:56] Caching tarball of preloaded images
	I1212 20:31:58.538536   33042 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:31:58.538550   33042 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 20:31:58.538648   33042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/config.json ...
	I1212 20:31:58.538838   33042 start.go:365] acquiring machines lock for multinode-562818-m03: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:31:58.538882   33042 start.go:369] acquired machines lock for "multinode-562818-m03" in 25.525µs
	I1212 20:31:58.538897   33042 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:31:58.538905   33042 fix.go:54] fixHost starting: m03
	I1212 20:31:58.539150   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:31:58.539180   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:31:58.553538   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I1212 20:31:58.553922   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:31:58.554404   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:31:58.554423   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:31:58.554738   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:31:58.554996   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:31:58.555130   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetState
	I1212 20:31:58.556850   33042 fix.go:102] recreateIfNeeded on multinode-562818-m03: state=Running err=<nil>
	W1212 20:31:58.556871   33042 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:31:58.558589   33042 out.go:177] * Updating the running kvm2 "multinode-562818-m03" VM ...
	I1212 20:31:58.559837   33042 machine.go:88] provisioning docker machine ...
	I1212 20:31:58.559859   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:31:58.560091   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetMachineName
	I1212 20:31:58.560267   33042 buildroot.go:166] provisioning hostname "multinode-562818-m03"
	I1212 20:31:58.560292   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetMachineName
	I1212 20:31:58.560442   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:31:58.562883   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.563334   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:31:58.563356   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.563472   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:31:58.563666   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:58.563869   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:58.564032   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:31:58.564207   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:31:58.564533   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1212 20:31:58.564551   33042 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-562818-m03 && echo "multinode-562818-m03" | sudo tee /etc/hostname
	I1212 20:31:58.706143   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-562818-m03
	
	I1212 20:31:58.706166   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:31:58.708889   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.709227   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:31:58.709256   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.709398   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:31:58.709576   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:58.709731   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:58.709860   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:31:58.710069   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:31:58.710384   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1212 20:31:58.710402   33042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-562818-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-562818-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-562818-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
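(Editor's note) The shell snippet above is the idempotent /etc/hosts fix-up minikube runs over SSH: the outer grep -xq only proceeds when no existing line already ends with the hostname, and then either rewrites an existing 127.0.1.1 entry in place with sed or appends a fresh "127.0.1.1 multinode-562818-m03" line. Re-running it against an already-provisioned host therefore changes nothing, which is why the command returns empty output here.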
	I1212 20:31:58.836065   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:31:58.836093   33042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:31:58.836112   33042 buildroot.go:174] setting up certificates
	I1212 20:31:58.836123   33042 provision.go:83] configureAuth start
	I1212 20:31:58.836134   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetMachineName
	I1212 20:31:58.836403   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetIP
	I1212 20:31:58.839119   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.839516   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:31:58.839552   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.839719   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:31:58.841807   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.842122   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:31:58.842148   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:58.842283   33042 provision.go:138] copyHostCerts
	I1212 20:31:58.842307   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:31:58.842331   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:31:58.842348   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:31:58.842427   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:31:58.842502   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:31:58.842519   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:31:58.842525   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:31:58.842548   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:31:58.842597   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:31:58.842613   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:31:58.842618   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:31:58.842638   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:31:58.842684   33042 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.multinode-562818-m03 san=[192.168.39.101 192.168.39.101 localhost 127.0.0.1 minikube multinode-562818-m03]
	I1212 20:31:59.059007   33042 provision.go:172] copyRemoteCerts
	I1212 20:31:59.059072   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:31:59.059098   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:31:59.061858   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:59.062291   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:31:59.062320   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:59.062556   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:31:59.062763   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:59.062959   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:31:59.063103   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m03/id_rsa Username:docker}
	I1212 20:31:59.153536   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:31:59.153617   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 20:31:59.177077   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:31:59.177158   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:31:59.200385   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:31:59.200469   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:31:59.224263   33042 provision.go:86] duration metric: configureAuth took 388.126864ms
	I1212 20:31:59.224304   33042 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:31:59.224516   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:31:59.224582   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:31:59.227281   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:59.227655   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:31:59.227687   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:31:59.227863   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:31:59.228086   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:59.228334   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:31:59.228524   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:31:59.228747   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:31:59.229088   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1212 20:31:59.229104   33042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:33:29.755721   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:33:29.755756   33042 machine.go:91] provisioned docker machine in 1m31.19590526s
	I1212 20:33:29.755770   33042 start.go:300] post-start starting for "multinode-562818-m03" (driver="kvm2")
	I1212 20:33:29.755783   33042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:33:29.755806   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:33:29.756202   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:33:29.756229   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:33:29.759350   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:29.759786   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:33:29.759822   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:29.760027   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:33:29.760245   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:33:29.760419   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:33:29.760564   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m03/id_rsa Username:docker}
	I1212 20:33:29.853696   33042 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:33:29.858204   33042 command_runner.go:130] > NAME=Buildroot
	I1212 20:33:29.858232   33042 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 20:33:29.858239   33042 command_runner.go:130] > ID=buildroot
	I1212 20:33:29.858247   33042 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 20:33:29.858254   33042 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 20:33:29.858615   33042 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:33:29.858642   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:33:29.858711   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:33:29.858805   33042 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:33:29.858818   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /etc/ssl/certs/164562.pem
	I1212 20:33:29.858929   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:33:29.867811   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:33:29.892539   33042 start.go:303] post-start completed in 136.755482ms
	I1212 20:33:29.892563   33042 fix.go:56] fixHost completed within 1m31.353657985s
	I1212 20:33:29.892584   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:33:29.895410   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:29.895774   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:33:29.895793   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:29.896009   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:33:29.896202   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:33:29.896363   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:33:29.896481   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:33:29.896650   33042 main.go:141] libmachine: Using SSH client type: native
	I1212 20:33:29.896975   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1212 20:33:29.896988   33042 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:33:30.024114   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702413210.016351514
	
	I1212 20:33:30.024139   33042 fix.go:206] guest clock: 1702413210.016351514
	I1212 20:33:30.024146   33042 fix.go:219] Guest: 2023-12-12 20:33:30.016351514 +0000 UTC Remote: 2023-12-12 20:33:29.892567979 +0000 UTC m=+552.529619493 (delta=123.783535ms)
	I1212 20:33:30.024161   33042 fix.go:190] guest clock delta is within tolerance: 123.783535ms
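The "guest clock" lines above come from running "date +%s.%N" on the guest and comparing the result against a host-side reference timestamp; the reported delta of ~123.8ms is simply guest time minus host time. A small Go sketch of that comparison using the values from the log (the 2-second tolerance is an assumption for illustration, not the value minikube uses):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the output of running "date +%s.%N" on the guest and
// returns how far the guest clock is from the given local reference time.
func clockDelta(guestDateOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestDateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	// Values taken from the log lines above.
	local := time.Date(2023, 12, 12, 20, 33, 29, 892567979, time.UTC)
	delta, err := clockDelta("1702413210.016351514", local)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	within := delta > -tolerance && delta < tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
}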
	I1212 20:33:30.024166   33042 start.go:83] releasing machines lock for "multinode-562818-m03", held for 1m31.485273916s
	I1212 20:33:30.024182   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:33:30.024436   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetIP
	I1212 20:33:30.027144   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:30.027542   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:33:30.027572   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:30.029416   33042 out.go:177] * Found network options:
	I1212 20:33:30.030714   33042 out.go:177]   - NO_PROXY=192.168.39.77,192.168.39.65
	W1212 20:33:30.032131   33042 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 20:33:30.032151   33042 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 20:33:30.032180   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:33:30.032909   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:33:30.033105   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .DriverName
	I1212 20:33:30.033213   33042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:33:30.033252   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	W1212 20:33:30.033348   33042 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 20:33:30.033372   33042 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 20:33:30.033433   33042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:33:30.033456   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHHostname
	I1212 20:33:30.035903   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:30.035976   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:30.036283   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:33:30.036313   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:30.036342   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:33:30.036370   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:30.036476   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:33:30.036624   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHPort
	I1212 20:33:30.036669   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:33:30.036778   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHKeyPath
	I1212 20:33:30.036870   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:33:30.036949   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetSSHUsername
	I1212 20:33:30.037021   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m03/id_rsa Username:docker}
	I1212 20:33:30.037067   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m03/id_rsa Username:docker}
	I1212 20:33:30.154035   33042 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
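The curl probe above only checks that registry.k8s.io answers within 2 seconds; the "Temporary Redirect" body is the expected response. A rough Go equivalent of that reachability check (an illustrative sketch that runs locally with direct network access, whereas the log runs curl on the guest over SSH):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkRegistryReachable mirrors the "curl -sS -m 2 https://registry.k8s.io/"
// connectivity probe above, using a 2-second overall timeout.
func checkRegistryReachable(url string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with status:", resp.Status)
	return nil
}

func main() {
	if err := checkRegistryReachable("https://registry.k8s.io/"); err != nil {
		fmt.Println("registry not reachable:", err)
	}
}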
	I1212 20:33:30.271905   33042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:33:30.278753   33042 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:33:30.278868   33042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:33:30.278933   33042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:33:30.288060   33042 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:33:30.288085   33042 start.go:475] detecting cgroup driver to use...
	I1212 20:33:30.288152   33042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:33:30.302049   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:33:30.316821   33042 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:33:30.316897   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:33:30.331202   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:33:30.344164   33042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:33:30.465152   33042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:33:30.580070   33042 docker.go:219] disabling docker service ...
	I1212 20:33:30.580141   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:33:30.596288   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:33:30.609102   33042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:33:30.745417   33042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:33:30.873413   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:33:30.886653   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:33:30.904231   33042 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:33:30.904274   33042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 20:33:30.904328   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:33:30.915043   33042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:33:30.915117   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:33:30.928958   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:33:30.941174   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:33:30.955410   33042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:33:30.968295   33042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:33:30.977706   33042 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:33:30.977817   33042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:33:30.987922   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:33:31.121366   33042 ssh_runner.go:195] Run: sudo systemctl restart crio
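The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pointed at registry.k8s.io/pause:3.9, cgroup_manager is forced to cgroupfs, and conmon_cgroup is re-added as "pod" before crio is restarted. A sketch of how those commands could be assembled in Go (illustrative only; crioConfCmds is a hypothetical helper that mirrors the logged commands, not minikube source):

package main

import "fmt"

// crioConfCmds returns shell commands that point CRI-O at the desired pause
// image and cgroup driver by editing the drop-in config, matching the log above.
func crioConfCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}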
	I1212 20:33:31.337832   33042 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:33:31.337909   33042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:33:31.342752   33042 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:33:31.342781   33042 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:33:31.342791   33042 command_runner.go:130] > Device: 16h/22d	Inode: 1169        Links: 1
	I1212 20:33:31.342802   33042 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:33:31.342810   33042 command_runner.go:130] > Access: 2023-12-12 20:33:31.267946996 +0000
	I1212 20:33:31.342818   33042 command_runner.go:130] > Modify: 2023-12-12 20:33:31.267946996 +0000
	I1212 20:33:31.342830   33042 command_runner.go:130] > Change: 2023-12-12 20:33:31.267946996 +0000
	I1212 20:33:31.342839   33042 command_runner.go:130] >  Birth: -
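After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock and stats it to confirm it is a socket. A minimal sketch of such a wait loop (the 500ms poll interval is assumed for illustration; this is not the actual minikube implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for socket %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}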
	I1212 20:33:31.342865   33042 start.go:543] Will wait 60s for crictl version
	I1212 20:33:31.342914   33042 ssh_runner.go:195] Run: which crictl
	I1212 20:33:31.346304   33042 command_runner.go:130] > /usr/bin/crictl
	I1212 20:33:31.346572   33042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:33:31.395307   33042 command_runner.go:130] > Version:  0.1.0
	I1212 20:33:31.395335   33042 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:33:31.395342   33042 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 20:33:31.395349   33042 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:33:31.395364   33042 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 20:33:31.395431   33042 ssh_runner.go:195] Run: crio --version
	I1212 20:33:31.441642   33042 command_runner.go:130] > crio version 1.24.1
	I1212 20:33:31.441667   33042 command_runner.go:130] > Version:          1.24.1
	I1212 20:33:31.441674   33042 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:33:31.441683   33042 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:33:31.441689   33042 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:33:31.441696   33042 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:33:31.441703   33042 command_runner.go:130] > Compiler:         gc
	I1212 20:33:31.441710   33042 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:33:31.441719   33042 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:33:31.441730   33042 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:33:31.441738   33042 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:33:31.441742   33042 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:33:31.441817   33042 ssh_runner.go:195] Run: crio --version
	I1212 20:33:31.495146   33042 command_runner.go:130] > crio version 1.24.1
	I1212 20:33:31.495170   33042 command_runner.go:130] > Version:          1.24.1
	I1212 20:33:31.495180   33042 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 20:33:31.495187   33042 command_runner.go:130] > GitTreeState:     dirty
	I1212 20:33:31.495194   33042 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 20:33:31.495200   33042 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 20:33:31.495207   33042 command_runner.go:130] > Compiler:         gc
	I1212 20:33:31.495213   33042 command_runner.go:130] > Platform:         linux/amd64
	I1212 20:33:31.495221   33042 command_runner.go:130] > Linkmode:         dynamic
	I1212 20:33:31.495236   33042 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 20:33:31.495263   33042 command_runner.go:130] > SeccompEnabled:   true
	I1212 20:33:31.495271   33042 command_runner.go:130] > AppArmorEnabled:  false
	I1212 20:33:31.498835   33042 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 20:33:31.501092   33042 out.go:177]   - env NO_PROXY=192.168.39.77
	I1212 20:33:31.502394   33042 out.go:177]   - env NO_PROXY=192.168.39.77,192.168.39.65
	I1212 20:33:31.503650   33042 main.go:141] libmachine: (multinode-562818-m03) Calling .GetIP
	I1212 20:33:31.506226   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:31.506615   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0a:be", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:21:59 +0000 UTC Type:0 Mac:52:54:00:0a:0a:be Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-562818-m03 Clientid:01:52:54:00:0a:0a:be}
	I1212 20:33:31.506639   33042 main.go:141] libmachine: (multinode-562818-m03) DBG | domain multinode-562818-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:0a:0a:be in network mk-multinode-562818
	I1212 20:33:31.506838   33042 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:33:31.510952   33042 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 20:33:31.511024   33042 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818 for IP: 192.168.39.101
	I1212 20:33:31.511043   33042 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:33:31.511192   33042 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:33:31.511226   33042 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:33:31.511246   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:33:31.511259   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:33:31.511271   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:33:31.511280   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:33:31.511330   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:33:31.511356   33042 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:33:31.511365   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:33:31.511386   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:33:31.511408   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:33:31.511429   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:33:31.511467   33042 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:33:31.511490   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:33:31.511503   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem -> /usr/share/ca-certificates/16456.pem
	I1212 20:33:31.511515   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> /usr/share/ca-certificates/164562.pem
	I1212 20:33:31.511798   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:33:31.535283   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:33:31.560303   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:33:31.584098   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:33:31.607731   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:33:31.632578   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:33:31.657017   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:33:31.681525   33042 ssh_runner.go:195] Run: openssl version
	I1212 20:33:31.687073   33042 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 20:33:31.687376   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:33:31.699269   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:33:31.703715   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:33:31.704099   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:33:31.704154   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:33:31.709470   33042 command_runner.go:130] > b5213941
	I1212 20:33:31.709790   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 20:33:31.720245   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:33:31.733818   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:33:31.738445   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:33:31.738640   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:33:31.738678   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:33:31.744124   33042 command_runner.go:130] > 51391683
	I1212 20:33:31.744467   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:33:31.753683   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:33:31.767295   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:33:31.773505   33042 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:33:31.773587   33042 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:33:31.773637   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:33:31.779788   33042 command_runner.go:130] > 3ec20f2e
	I1212 20:33:31.779853   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
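Each CA bundle above is installed by hashing it with "openssl x509 -hash -noout" and linking it as /etc/ssl/certs/<hash>.0, which is how OpenSSL locates trusted certificates. A sketch of that pattern (assumes openssl on PATH and local filesystem access; the log performs the same steps on the guest over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// caCertLink computes where a CA certificate would be linked under
// /etc/ssl/certs by hashing it with openssl, mirroring the log above.
func caCertLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	// The actual install step would then be: ln -fs <certPath> <link>
	return fmt.Sprintf("/etc/ssl/certs/%s.0", hash), nil
}

func main() {
	link, err := caCertLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("link target for CA cert:", link)
}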
	I1212 20:33:31.790777   33042 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:33:31.795098   33042 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:33:31.795156   33042 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 20:33:31.795230   33042 ssh_runner.go:195] Run: crio config
	I1212 20:33:31.846532   33042 command_runner.go:130] ! time="2023-12-12 20:33:31.838878817Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 20:33:31.846559   33042 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:33:31.852184   33042 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:33:31.852205   33042 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:33:31.852250   33042 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:33:31.852259   33042 command_runner.go:130] > #
	I1212 20:33:31.852267   33042 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:33:31.852276   33042 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:33:31.852283   33042 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:33:31.852292   33042 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:33:31.852296   33042 command_runner.go:130] > # reload'.
	I1212 20:33:31.852302   33042 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:33:31.852310   33042 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:33:31.852316   33042 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:33:31.852324   33042 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:33:31.852328   33042 command_runner.go:130] > [crio]
	I1212 20:33:31.852336   33042 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:33:31.852342   33042 command_runner.go:130] > # containers images, in this directory.
	I1212 20:33:31.852349   33042 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 20:33:31.852358   33042 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:33:31.852365   33042 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 20:33:31.852371   33042 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:33:31.852380   33042 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:33:31.852389   33042 command_runner.go:130] > storage_driver = "overlay"
	I1212 20:33:31.852397   33042 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:33:31.852403   33042 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:33:31.852409   33042 command_runner.go:130] > storage_option = [
	I1212 20:33:31.852414   33042 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 20:33:31.852420   33042 command_runner.go:130] > ]
	I1212 20:33:31.852426   33042 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:33:31.852434   33042 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:33:31.852439   33042 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:33:31.852444   33042 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:33:31.852451   33042 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:33:31.852458   33042 command_runner.go:130] > # always happen on a node reboot
	I1212 20:33:31.852464   33042 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:33:31.852472   33042 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:33:31.852481   33042 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:33:31.852491   33042 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:33:31.852498   33042 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 20:33:31.852506   33042 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:33:31.852516   33042 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:33:31.852523   33042 command_runner.go:130] > # internal_wipe = true
	I1212 20:33:31.852529   33042 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:33:31.852537   33042 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:33:31.852544   33042 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:33:31.852552   33042 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:33:31.852564   33042 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:33:31.852570   33042 command_runner.go:130] > [crio.api]
	I1212 20:33:31.852576   33042 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:33:31.852583   33042 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:33:31.852589   33042 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:33:31.852596   33042 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:33:31.852603   33042 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:33:31.852610   33042 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:33:31.852614   33042 command_runner.go:130] > # stream_port = "0"
	I1212 20:33:31.852622   33042 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:33:31.852629   33042 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:33:31.852635   33042 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:33:31.852646   33042 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:33:31.852654   33042 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:33:31.852663   33042 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 20:33:31.852668   33042 command_runner.go:130] > # minutes.
	I1212 20:33:31.852673   33042 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:33:31.852682   33042 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:33:31.852690   33042 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 20:33:31.852694   33042 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:33:31.852701   33042 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:33:31.852710   33042 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:33:31.852721   33042 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 20:33:31.852727   33042 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:33:31.852739   33042 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:33:31.852749   33042 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 20:33:31.852760   33042 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 20:33:31.852774   33042 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 20:33:31.852798   33042 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:33:31.852813   33042 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:33:31.852820   33042 command_runner.go:130] > [crio.runtime]
	I1212 20:33:31.852829   33042 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:33:31.852840   33042 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:33:31.852849   33042 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:33:31.852858   33042 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:33:31.852867   33042 command_runner.go:130] > # default_ulimits = [
	I1212 20:33:31.852874   33042 command_runner.go:130] > # ]
	I1212 20:33:31.852884   33042 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:33:31.852893   33042 command_runner.go:130] > # no_pivot = false
	I1212 20:33:31.852906   33042 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:33:31.852919   33042 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:33:31.852933   33042 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:33:31.852946   33042 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:33:31.852957   33042 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:33:31.852970   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:33:31.852981   33042 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 20:33:31.852991   33042 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:33:31.853001   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:33:31.853010   33042 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:33:31.853019   33042 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:33:31.853028   33042 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:33:31.853038   33042 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:33:31.853052   33042 command_runner.go:130] > conmon_env = [
	I1212 20:33:31.853064   33042 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 20:33:31.853072   33042 command_runner.go:130] > ]
	I1212 20:33:31.853080   33042 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:33:31.853091   33042 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:33:31.853104   33042 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:33:31.853111   33042 command_runner.go:130] > # default_env = [
	I1212 20:33:31.853119   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853129   33042 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:33:31.853139   33042 command_runner.go:130] > # selinux = false
	I1212 20:33:31.853146   33042 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:33:31.853155   33042 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 20:33:31.853161   33042 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 20:33:31.853166   33042 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:33:31.853172   33042 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 20:33:31.853180   33042 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 20:33:31.853188   33042 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 20:33:31.853195   33042 command_runner.go:130] > # which might increase security.
	I1212 20:33:31.853202   33042 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 20:33:31.853209   33042 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:33:31.853217   33042 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:33:31.853225   33042 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:33:31.853235   33042 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:33:31.853242   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:33:31.853249   33042 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:33:31.853255   33042 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:33:31.853262   33042 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:33:31.853267   33042 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:33:31.853276   33042 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:33:31.853280   33042 command_runner.go:130] > # irqbalance daemon.
	I1212 20:33:31.853288   33042 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:33:31.853297   33042 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:33:31.853304   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:33:31.853310   33042 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:33:31.853316   33042 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:33:31.853323   33042 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:33:31.853329   33042 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:33:31.853336   33042 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:33:31.853342   33042 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:33:31.853350   33042 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:33:31.853355   33042 command_runner.go:130] > # will be added.
	I1212 20:33:31.853359   33042 command_runner.go:130] > # default_capabilities = [
	I1212 20:33:31.853365   33042 command_runner.go:130] > # 	"CHOWN",
	I1212 20:33:31.853369   33042 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:33:31.853374   33042 command_runner.go:130] > # 	"FSETID",
	I1212 20:33:31.853381   33042 command_runner.go:130] > # 	"FOWNER",
	I1212 20:33:31.853385   33042 command_runner.go:130] > # 	"SETGID",
	I1212 20:33:31.853391   33042 command_runner.go:130] > # 	"SETUID",
	I1212 20:33:31.853395   33042 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:33:31.853401   33042 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:33:31.853405   33042 command_runner.go:130] > # 	"KILL",
	I1212 20:33:31.853411   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853418   33042 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:33:31.853426   33042 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:33:31.853432   33042 command_runner.go:130] > # default_sysctls = [
	I1212 20:33:31.853435   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853442   33042 command_runner.go:130] > # List of devices on the host that a
	I1212 20:33:31.853448   33042 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:33:31.853455   33042 command_runner.go:130] > # allowed_devices = [
	I1212 20:33:31.853459   33042 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:33:31.853465   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853470   33042 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:33:31.853480   33042 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:33:31.853486   33042 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:33:31.853509   33042 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:33:31.853516   33042 command_runner.go:130] > # additional_devices = [
	I1212 20:33:31.853519   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853524   33042 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:33:31.853528   33042 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:33:31.853533   33042 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:33:31.853539   33042 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:33:31.853543   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853552   33042 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:33:31.853558   33042 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:33:31.853564   33042 command_runner.go:130] > # Defaults to false.
	I1212 20:33:31.853569   33042 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:33:31.853578   33042 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:33:31.853586   33042 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:33:31.853590   33042 command_runner.go:130] > # hooks_dir = [
	I1212 20:33:31.853597   33042 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:33:31.853601   33042 command_runner.go:130] > # ]
	I1212 20:33:31.853609   33042 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:33:31.853619   33042 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:33:31.853627   33042 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:33:31.853632   33042 command_runner.go:130] > #
	I1212 20:33:31.853638   33042 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:33:31.853651   33042 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:33:31.853657   33042 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:33:31.853663   33042 command_runner.go:130] > #
	I1212 20:33:31.853669   33042 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:33:31.853678   33042 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:33:31.853686   33042 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:33:31.853692   33042 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:33:31.853696   33042 command_runner.go:130] > #
	I1212 20:33:31.853700   33042 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:33:31.853705   33042 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:33:31.853714   33042 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:33:31.853720   33042 command_runner.go:130] > pids_limit = 1024
	I1212 20:33:31.853727   33042 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 20:33:31.853735   33042 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:33:31.853741   33042 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:33:31.853751   33042 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:33:31.853756   33042 command_runner.go:130] > # log_size_max = -1
	I1212 20:33:31.853764   33042 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:33:31.853768   33042 command_runner.go:130] > # log_to_journald = false
	I1212 20:33:31.853774   33042 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:33:31.853781   33042 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:33:31.853788   33042 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:33:31.853795   33042 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:33:31.853801   33042 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:33:31.853808   33042 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:33:31.853813   33042 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:33:31.853820   33042 command_runner.go:130] > # read_only = false
	I1212 20:33:31.853825   33042 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:33:31.853831   33042 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:33:31.853838   33042 command_runner.go:130] > # live configuration reload.
	I1212 20:33:31.853843   33042 command_runner.go:130] > # log_level = "info"
	I1212 20:33:31.853851   33042 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:33:31.853856   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:33:31.853863   33042 command_runner.go:130] > # log_filter = ""
	I1212 20:33:31.853869   33042 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:33:31.853877   33042 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:33:31.853884   33042 command_runner.go:130] > # separated by comma.
	I1212 20:33:31.853888   33042 command_runner.go:130] > # uid_mappings = ""
	I1212 20:33:31.853896   33042 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:33:31.853902   33042 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:33:31.853908   33042 command_runner.go:130] > # separated by comma.
	I1212 20:33:31.853913   33042 command_runner.go:130] > # gid_mappings = ""
	I1212 20:33:31.853921   33042 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:33:31.853928   33042 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:33:31.853936   33042 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:33:31.853945   33042 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:33:31.853950   33042 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:33:31.853959   33042 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:33:31.853967   33042 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:33:31.853974   33042 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:33:31.853980   33042 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:33:31.853988   33042 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:33:31.853996   33042 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:33:31.854000   33042 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:33:31.854008   33042 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:33:31.854014   33042 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:33:31.854020   33042 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:33:31.854025   33042 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:33:31.854033   33042 command_runner.go:130] > drop_infra_ctr = false
	I1212 20:33:31.854040   33042 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:33:31.854049   33042 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:33:31.854056   33042 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:33:31.854061   33042 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:33:31.854066   33042 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:33:31.854073   33042 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:33:31.854078   33042 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:33:31.854088   33042 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:33:31.854093   33042 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 20:33:31.854111   33042 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:33:31.854117   33042 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 20:33:31.854125   33042 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 20:33:31.854132   33042 command_runner.go:130] > # default_runtime = "runc"
	I1212 20:33:31.854137   33042 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:33:31.854147   33042 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1212 20:33:31.854180   33042 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:33:31.854194   33042 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:33:31.854202   33042 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:33:31.854207   33042 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:33:31.854212   33042 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:33:31.854218   33042 command_runner.go:130] > # ]
	I1212 20:33:31.854225   33042 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:33:31.854234   33042 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:33:31.854242   33042 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 20:33:31.854251   33042 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 20:33:31.854256   33042 command_runner.go:130] > #
	I1212 20:33:31.854262   33042 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 20:33:31.854269   33042 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 20:33:31.854276   33042 command_runner.go:130] > #  runtime_type = "oci"
	I1212 20:33:31.854282   33042 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 20:33:31.854289   33042 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 20:33:31.854294   33042 command_runner.go:130] > #  allowed_annotations = []
	I1212 20:33:31.854300   33042 command_runner.go:130] > # Where:
	I1212 20:33:31.854305   33042 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 20:33:31.854314   33042 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 20:33:31.854322   33042 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:33:31.854330   33042 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:33:31.854336   33042 command_runner.go:130] > #   in $PATH.
	I1212 20:33:31.854348   33042 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 20:33:31.854355   33042 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:33:31.854361   33042 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 20:33:31.854367   33042 command_runner.go:130] > #   state.
	I1212 20:33:31.854375   33042 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:33:31.854386   33042 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 20:33:31.854394   33042 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:33:31.854402   33042 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:33:31.854411   33042 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:33:31.854420   33042 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:33:31.854427   33042 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:33:31.854433   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:33:31.854443   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:33:31.854456   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:33:31.854471   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:33:31.854478   33042 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:33:31.854487   33042 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:33:31.854495   33042 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:33:31.854504   33042 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 20:33:31.854511   33042 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:33:31.854516   33042 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:33:31.854522   33042 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 20:33:31.854526   33042 command_runner.go:130] > runtime_type = "oci"
	I1212 20:33:31.854533   33042 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:33:31.854538   33042 command_runner.go:130] > runtime_config_path = ""
	I1212 20:33:31.854544   33042 command_runner.go:130] > monitor_path = ""
	I1212 20:33:31.854548   33042 command_runner.go:130] > monitor_cgroup = ""
	I1212 20:33:31.854555   33042 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:33:31.854561   33042 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 20:33:31.854567   33042 command_runner.go:130] > # running containers
	I1212 20:33:31.854572   33042 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 20:33:31.854580   33042 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 20:33:31.854606   33042 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 20:33:31.854614   33042 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 20:33:31.854622   33042 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 20:33:31.854629   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 20:33:31.854634   33042 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 20:33:31.854648   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 20:33:31.854655   33042 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 20:33:31.854660   33042 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 20:33:31.854668   33042 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:33:31.854676   33042 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:33:31.854684   33042 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:33:31.854694   33042 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:33:31.854701   33042 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 20:33:31.854709   33042 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:33:31.854719   33042 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:33:31.854729   33042 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:33:31.854738   33042 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:33:31.854747   33042 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:33:31.854753   33042 command_runner.go:130] > # Example:
	I1212 20:33:31.854760   33042 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:33:31.854766   33042 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:33:31.854771   33042 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:33:31.854777   33042 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:33:31.854783   33042 command_runner.go:130] > # cpuset = 0
	I1212 20:33:31.854788   33042 command_runner.go:130] > # cpushares = "0-1"
	I1212 20:33:31.854794   33042 command_runner.go:130] > # Where:
	I1212 20:33:31.854798   33042 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:33:31.854807   33042 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:33:31.854815   33042 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:33:31.854821   33042 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:33:31.854830   33042 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:33:31.854835   33042 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:33:31.854839   33042 command_runner.go:130] > # 
	I1212 20:33:31.854848   33042 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:33:31.854852   33042 command_runner.go:130] > #
	I1212 20:33:31.854858   33042 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:33:31.854866   33042 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 20:33:31.854874   33042 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 20:33:31.854881   33042 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 20:33:31.854889   33042 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 20:33:31.854908   33042 command_runner.go:130] > [crio.image]
	I1212 20:33:31.854914   33042 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:33:31.854921   33042 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:33:31.854927   33042 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:33:31.854935   33042 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:33:31.854943   33042 command_runner.go:130] > # global_auth_file = ""
	I1212 20:33:31.854950   33042 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:33:31.854956   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:33:31.854963   33042 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 20:33:31.854970   33042 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:33:31.854978   33042 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:33:31.854983   33042 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:33:31.854990   33042 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:33:31.854996   33042 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:33:31.855005   33042 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 20:33:31.855011   33042 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 20:33:31.855019   33042 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:33:31.855026   33042 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:33:31.855033   33042 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:33:31.855041   33042 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:33:31.855050   33042 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:33:31.855058   33042 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:33:31.855066   33042 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:33:31.855072   33042 command_runner.go:130] > # signature_policy = ""
	I1212 20:33:31.855078   33042 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:33:31.855086   33042 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:33:31.855093   33042 command_runner.go:130] > # changing them here.
	I1212 20:33:31.855097   33042 command_runner.go:130] > # insecure_registries = [
	I1212 20:33:31.855103   33042 command_runner.go:130] > # ]
	I1212 20:33:31.855111   33042 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:33:31.855119   33042 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:33:31.855123   33042 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:33:31.855130   33042 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:33:31.855136   33042 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:33:31.855142   33042 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:33:31.855149   33042 command_runner.go:130] > # CNI plugins.
	I1212 20:33:31.855153   33042 command_runner.go:130] > [crio.network]
	I1212 20:33:31.855161   33042 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:33:31.855168   33042 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:33:31.855173   33042 command_runner.go:130] > # cni_default_network = ""
	I1212 20:33:31.855179   33042 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:33:31.855186   33042 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:33:31.855192   33042 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:33:31.855198   33042 command_runner.go:130] > # plugin_dirs = [
	I1212 20:33:31.855203   33042 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:33:31.855209   33042 command_runner.go:130] > # ]
	I1212 20:33:31.855215   33042 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:33:31.855221   33042 command_runner.go:130] > [crio.metrics]
	I1212 20:33:31.855226   33042 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:33:31.855233   33042 command_runner.go:130] > enable_metrics = true
	I1212 20:33:31.855250   33042 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:33:31.855262   33042 command_runner.go:130] > # By default, all metrics are enabled.
	I1212 20:33:31.855272   33042 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:33:31.855283   33042 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:33:31.855291   33042 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:33:31.855295   33042 command_runner.go:130] > # metrics_collectors = [
	I1212 20:33:31.855301   33042 command_runner.go:130] > # 	"operations",
	I1212 20:33:31.855305   33042 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 20:33:31.855312   33042 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 20:33:31.855317   33042 command_runner.go:130] > # 	"operations_errors",
	I1212 20:33:31.855324   33042 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 20:33:31.855331   33042 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 20:33:31.855336   33042 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 20:33:31.855342   33042 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 20:33:31.855347   33042 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 20:33:31.855353   33042 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:33:31.855358   33042 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 20:33:31.855365   33042 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:33:31.855369   33042 command_runner.go:130] > # 	"containers_oom",
	I1212 20:33:31.855375   33042 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:33:31.855379   33042 command_runner.go:130] > # 	"operations_total",
	I1212 20:33:31.855386   33042 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:33:31.855391   33042 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:33:31.855398   33042 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:33:31.855402   33042 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:33:31.855411   33042 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:33:31.855418   33042 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:33:31.855423   33042 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:33:31.855430   33042 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:33:31.855434   33042 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:33:31.855440   33042 command_runner.go:130] > # ]
	I1212 20:33:31.855445   33042 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:33:31.855449   33042 command_runner.go:130] > # metrics_port = 9090
	I1212 20:33:31.855456   33042 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:33:31.855461   33042 command_runner.go:130] > # metrics_socket = ""
	I1212 20:33:31.855468   33042 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:33:31.855474   33042 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:33:31.855482   33042 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:33:31.855490   33042 command_runner.go:130] > # certificate on any modification event.
	I1212 20:33:31.855494   33042 command_runner.go:130] > # metrics_cert = ""
	I1212 20:33:31.855501   33042 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:33:31.855509   33042 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:33:31.855514   33042 command_runner.go:130] > # metrics_key = ""
	I1212 20:33:31.855523   33042 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:33:31.855529   33042 command_runner.go:130] > [crio.tracing]
	I1212 20:33:31.855534   33042 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:33:31.855541   33042 command_runner.go:130] > # enable_tracing = false
	I1212 20:33:31.855547   33042 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:33:31.855553   33042 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 20:33:31.855559   33042 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 20:33:31.855566   33042 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:33:31.855572   33042 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:33:31.855578   33042 command_runner.go:130] > [crio.stats]
	I1212 20:33:31.855584   33042 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:33:31.855592   33042 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:33:31.855598   33042 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:33:31.855672   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:33:31.855685   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:33:31.855696   33042 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:33:31.855718   33042 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.101 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-562818 NodeName:multinode-562818-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:33:31.855833   33042 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-562818-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:33:31.855882   33042 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-562818-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 20:33:31.855934   33042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 20:33:31.866371   33042 command_runner.go:130] > kubeadm
	I1212 20:33:31.866393   33042 command_runner.go:130] > kubectl
	I1212 20:33:31.866399   33042 command_runner.go:130] > kubelet
	I1212 20:33:31.866420   33042 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 20:33:31.866474   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 20:33:31.876045   33042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1212 20:33:31.892281   33042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:33:31.907933   33042 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I1212 20:33:31.911588   33042 command_runner.go:130] > 192.168.39.77	control-plane.minikube.internal
	I1212 20:33:31.911652   33042 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:33:31.911991   33042 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:33:31.912057   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:33:31.912101   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:33:31.926863   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I1212 20:33:31.927265   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:33:31.927741   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:33:31.927764   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:33:31.928042   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:33:31.928220   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:33:31.928350   33042 start.go:304] JoinCluster: &{Name:multinode-562818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-562818 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:33:31.928468   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 20:33:31.928488   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:33:31.931077   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:33:31.931485   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:33:31.931526   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:33:31.931602   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:33:31.931778   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:33:31.931928   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:33:31.932055   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:33:32.104911   33042 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v1zwlj.8uxjzq6nhv2hidi2 --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
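The join command printed above is obtained by running kubeadm on the control-plane node over the SSH session opened a few lines earlier. As a rough, standalone sketch only (minikube uses its own ssh_runner rather than this code; the key path is a placeholder, while the address, user, and command are the ones shown in the log), the same step with golang.org/x/crypto/ssh could look like:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder key path; in the log this is the machine's id_rsa under ~/.minikube.
		key, err := os.ReadFile("/path/to/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.39.77:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Same command the log shows being run on the control plane.
		out, err := sess.Output(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}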
	I1212 20:33:32.107769   33042 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 20:33:32.107828   33042 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:33:32.108109   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:33:32.108147   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:33:32.122087   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I1212 20:33:32.122480   33042 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:33:32.122919   33042 main.go:141] libmachine: Using API Version  1
	I1212 20:33:32.122938   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:33:32.123204   33042 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:33:32.123374   33042 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:33:32.123555   33042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-562818-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1212 20:33:32.123578   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:33:32.126543   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:33:32.126959   33042 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:29:27 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:33:32.126979   33042 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:33:32.127131   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:33:32.127306   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:33:32.127483   33042 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:33:32.127624   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:33:32.280886   33042 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1212 20:33:32.351008   33042 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-q7n6w, kube-system/kube-proxy-xch7v
	I1212 20:33:35.374745   33042 command_runner.go:130] > node/multinode-562818-m03 cordoned
	I1212 20:33:35.374779   33042 command_runner.go:130] > pod "busybox-5bc68d56bd-98xh8" has DeletionTimestamp older than 1 seconds, skipping
	I1212 20:33:35.374790   33042 command_runner.go:130] > node/multinode-562818-m03 drained
	I1212 20:33:35.374817   33042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-562818-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.251233884s)
	I1212 20:33:35.374833   33042 node.go:108] successfully drained node "m03"
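Before rejoining, the stale m03 node is cordoned and drained by shelling out to the bundled kubectl with the flags shown above. A minimal in-process sketch of the same step using client-go and k8s.io/kubectl/pkg/drain (this is not minikube's implementation; the kubeconfig path, node name, and field names assume a recent kubectl library version) might look like:

	package main

	import (
		"context"
		"log"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/kubectl/pkg/drain"
	)

	func main() {
		// Kubeconfig path and node name are taken from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()

		// Mirror the kubectl flags in the log: --force --grace-period=1
		// --ignore-daemonsets --delete-emptydir-data --disable-eviction.
		helper := &drain.Helper{
			Ctx:                 ctx,
			Client:              cs,
			Force:               true,
			GracePeriodSeconds:  1,
			IgnoreAllDaemonSets: true,
			DeleteEmptyDirData:  true,
			DisableEviction:     true,
			Out:                 os.Stdout,
			ErrOut:              os.Stderr,
		}

		node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-562818-m03", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
			log.Fatal(err)
		}
		if err := drain.RunNodeDrain(helper, "multinode-562818-m03"); err != nil {
			log.Fatal(err)
		}
	}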
	I1212 20:33:35.375291   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:33:35.375566   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:33:35.375856   33042 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1212 20:33:35.375906   33042 round_trippers.go:463] DELETE https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:33:35.375917   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:35.375928   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:35.375937   33042 round_trippers.go:473]     Content-Type: application/json
	I1212 20:33:35.375946   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:35.391322   33042 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1212 20:33:35.391357   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:35.391365   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:35.391371   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:35.391377   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:35.391382   33042 round_trippers.go:580]     Content-Length: 171
	I1212 20:33:35.391387   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:35 GMT
	I1212 20:33:35.391393   33042 round_trippers.go:580]     Audit-Id: 09a7781c-a2d7-47c1-aab1-af7152c7866c
	I1212 20:33:35.391399   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:35.391427   33042 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-562818-m03","kind":"nodes","uid":"86ea80af-5628-4573-839f-f5590d741ec8"}}
	I1212 20:33:35.391474   33042 node.go:124] successfully deleted node "m03"
	I1212 20:33:35.391489   33042 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
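The DELETE request above removes the stale Node object through the API so that the subsequent kubeadm join can re-register the machine cleanly. Continuing the drain sketch above (same cs and ctx; apierrors is k8s.io/apimachinery/pkg/api/errors), the typed client-go equivalent of that raw request is simply:

	// Delete the stale Node object; tolerate it already being gone.
	if err := cs.CoreV1().Nodes().Delete(ctx, "multinode-562818-m03", metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		log.Fatal(err)
	}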
	I1212 20:33:35.391513   33042 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 20:33:35.391534   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v1zwlj.8uxjzq6nhv2hidi2 --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-562818-m03"
	I1212 20:33:35.451953   33042 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 20:33:35.645791   33042 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 20:33:35.645827   33042 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 20:33:35.719753   33042 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:33:35.719898   33042 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:33:35.720474   33042 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 20:33:35.876537   33042 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 20:33:36.401171   33042 command_runner.go:130] > This node has joined the cluster:
	I1212 20:33:36.401193   33042 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 20:33:36.401200   33042 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 20:33:36.401206   33042 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 20:33:36.404345   33042 command_runner.go:130] ! W1212 20:33:35.443908    2393 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 20:33:36.404374   33042 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1212 20:33:36.404386   33042 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1212 20:33:36.404400   33042 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1212 20:33:36.404421   33042 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v1zwlj.8uxjzq6nhv2hidi2 --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-562818-m03": (1.01287171s)
	I1212 20:33:36.404442   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 20:33:36.690613   33042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=multinode-562818 minikube.k8s.io/updated_at=2023_12_12T20_33_36_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:33:36.802096   33042 command_runner.go:130] > node/multinode-562818-m02 labeled
	I1212 20:33:36.815002   33042 command_runner.go:130] > node/multinode-562818-m03 labeled
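The label step above runs "kubectl label nodes ... -l minikube.k8s.io/primary!=true --overwrite" on the control plane, which is why both worker nodes come back labeled. Continuing the same client-go sketch (types is k8s.io/apimachinery/pkg/types; label values copied from the log), a rough equivalent is:

	// List the non-primary nodes and strategic-merge-patch the same labels onto each.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "minikube.k8s.io/primary!=true",
	})
	if err != nil {
		log.Fatal(err)
	}
	patch := []byte(`{"metadata":{"labels":{` +
		`"minikube.k8s.io/version":"v1.32.0",` +
		`"minikube.k8s.io/name":"multinode-562818",` +
		`"minikube.k8s.io/primary":"false"}}}`)
	for _, n := range nodes.Items {
		if _, err := cs.CoreV1().Nodes().Patch(ctx, n.Name,
			types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			log.Fatal(err)
		}
	}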
	I1212 20:33:36.816646   33042 start.go:306] JoinCluster complete in 4.888291762s
	I1212 20:33:36.816674   33042 cni.go:84] Creating CNI manager for ""
	I1212 20:33:36.816682   33042 cni.go:136] 3 nodes found, recommending kindnet
	I1212 20:33:36.816740   33042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:33:36.822233   33042 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 20:33:36.822256   33042 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 20:33:36.822266   33042 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 20:33:36.822275   33042 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:33:36.822284   33042 command_runner.go:130] > Access: 2023-12-12 20:29:28.351313795 +0000
	I1212 20:33:36.822292   33042 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 20:33:36.822298   33042 command_runner.go:130] > Change: 2023-12-12 20:29:26.512313795 +0000
	I1212 20:33:36.822304   33042 command_runner.go:130] >  Birth: -
	I1212 20:33:36.822577   33042 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 20:33:36.822646   33042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 20:33:36.843960   33042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:33:37.193262   33042 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:33:37.197999   33042 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 20:33:37.201775   33042 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 20:33:37.217856   33042 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 20:33:37.224028   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:33:37.224257   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:33:37.224527   33042 round_trippers.go:463] GET https://192.168.39.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 20:33:37.224541   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.224552   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.224561   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.227251   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:37.227269   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.227277   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.227285   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.227292   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.227301   33042 round_trippers.go:580]     Content-Length: 291
	I1212 20:33:37.227317   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.227330   33042 round_trippers.go:580]     Audit-Id: 38309f71-0b1f-4e09-aaf0-c6ccd3058e8b
	I1212 20:33:37.227342   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.227368   33042 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ede74add-216c-497a-8a4e-0f24b8beccc3","resourceVersion":"849","creationTimestamp":"2023-12-12T20:19:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 20:33:37.227457   33042 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-562818" context rescaled to 1 replicas
	I1212 20:33:37.227488   33042 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 20:33:37.229380   33042 out.go:177] * Verifying Kubernetes components...
	I1212 20:33:37.230788   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:33:37.244648   33042 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:33:37.244938   33042 kapi.go:59] client config for multinode-562818: &rest.Config{Host:"https://192.168.39.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/multinode-562818/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:33:37.245218   33042 node_ready.go:35] waiting up to 6m0s for node "multinode-562818-m03" to be "Ready" ...
	I1212 20:33:37.245299   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:33:37.245309   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.245321   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.245334   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.247778   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:37.247799   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.247808   33042 round_trippers.go:580]     Audit-Id: 8dd3e951-2f43-4096-adf8-20c7efea0788
	I1212 20:33:37.247816   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.247824   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.247837   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.247845   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.247853   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.248177   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m03","uid":"94632321-4471-4b6c-b449-cdfe24f82c2b","resourceVersion":"1189","creationTimestamp":"2023-12-12T20:33:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_33_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:33:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 20:33:37.248494   33042 node_ready.go:49] node "multinode-562818-m03" has status "Ready":"True"
	I1212 20:33:37.248514   33042 node_ready.go:38] duration metric: took 3.278263ms waiting for node "multinode-562818-m03" to be "Ready" ...
	I1212 20:33:37.248523   33042 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:33:37.248589   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I1212 20:33:37.248601   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.248608   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.248617   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.252748   33042 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 20:33:37.252764   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.252773   33042 round_trippers.go:580]     Audit-Id: ede61327-4326-4d50-b79a-884d831b7b77
	I1212 20:33:37.252781   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.252788   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.252799   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.252810   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.252821   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.254764   33042 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1196"},"items":[{"metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82038 chars]
	I1212 20:33:37.257063   33042 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.257133   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-689lp
	I1212 20:33:37.257146   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.257154   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.257160   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.259356   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:37.259376   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.259386   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.259394   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.259405   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.259416   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.259428   33042 round_trippers.go:580]     Audit-Id: 91474c55-df88-410a-920d-c5b2afeb3210
	I1212 20:33:37.259439   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.259577   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-689lp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e77852fc-eb8a-4027-98e1-070b4ca43f54","resourceVersion":"837","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2dbf7577-92f2-4991-92a5-5bbf657fc5b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2dbf7577-92f2-4991-92a5-5bbf657fc5b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 20:33:37.259976   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:37.259990   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.259997   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.260006   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.262485   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:37.262505   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.262514   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.262527   33042 round_trippers.go:580]     Audit-Id: 02cb5dce-1909-40aa-9710-f64cb9642414
	I1212 20:33:37.262539   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.262551   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.262562   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.262574   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.262735   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:33:37.263131   33042 pod_ready.go:92] pod "coredns-5dd5756b68-689lp" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:37.263156   33042 pod_ready.go:81] duration metric: took 6.071886ms waiting for pod "coredns-5dd5756b68-689lp" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.263165   33042 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.263215   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-562818
	I1212 20:33:37.263224   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.263231   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.263253   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.265122   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:33:37.265136   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.265142   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.265150   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.265161   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.265171   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.265179   33042 round_trippers.go:580]     Audit-Id: b9973901-ff14-4598-b92c-0fd435779297
	I1212 20:33:37.265185   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.265380   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-562818","namespace":"kube-system","uid":"5a874e4d-12ab-400c-8086-05073ffd1b13","resourceVersion":"831","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.77:2379","kubernetes.io/config.hash":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.mirror":"e147e28129df59a83fcfb97d45da77e4","kubernetes.io/config.seen":"2023-12-12T20:19:35.712592681Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 20:33:37.265705   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:37.265717   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.265724   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.265733   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.267712   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:33:37.267732   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.267743   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.267752   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.267766   33042 round_trippers.go:580]     Audit-Id: 514d9d30-3918-4371-96d6-0bb36d5e33f5
	I1212 20:33:37.267776   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.267788   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.267804   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.267936   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:33:37.268294   33042 pod_ready.go:92] pod "etcd-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:37.268315   33042 pod_ready.go:81] duration metric: took 5.138614ms waiting for pod "etcd-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.268338   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.268398   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-562818
	I1212 20:33:37.268411   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.268421   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.268435   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.270219   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:33:37.270231   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.270237   33042 round_trippers.go:580]     Audit-Id: c919e32b-63df-4fed-b7ec-9c019a1305e8
	I1212 20:33:37.270244   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.270252   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.270267   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.270274   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.270280   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.270447   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-562818","namespace":"kube-system","uid":"7d766a87-0f52-46ef-b1fb-392a197bca9a","resourceVersion":"857","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.77:8443","kubernetes.io/config.hash":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.mirror":"193a44f373aa39bf67a4fef20e3c8d27","kubernetes.io/config.seen":"2023-12-12T20:19:35.712596975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 20:33:37.270784   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:37.270796   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.270803   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.270811   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.272535   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:33:37.272556   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.272566   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.272575   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.272595   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.272604   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.272617   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.272628   33042 round_trippers.go:580]     Audit-Id: 3312f73a-e22c-4da6-a966-3b655b8569fb
	I1212 20:33:37.272908   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:33:37.273151   33042 pod_ready.go:92] pod "kube-apiserver-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:37.273163   33042 pod_ready.go:81] duration metric: took 4.812167ms waiting for pod "kube-apiserver-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.273170   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.273206   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-562818
	I1212 20:33:37.273213   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.273221   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.273228   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.274854   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:33:37.274874   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.274885   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.274897   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.274910   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.274923   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.274935   33042 round_trippers.go:580]     Audit-Id: d95d061e-0d2b-4eb6-8c90-285501d691bd
	I1212 20:33:37.274948   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.275098   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-562818","namespace":"kube-system","uid":"23b73a4b-e188-4b7c-a13d-1fd61862a4e1","resourceVersion":"846","creationTimestamp":"2023-12-12T20:19:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.mirror":"a7cd7c8c41f9e966d5f21f814b258e09","kubernetes.io/config.seen":"2023-12-12T20:19:35.712598374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 20:33:37.275404   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:37.275416   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.275422   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.275429   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.277194   33042 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 20:33:37.277214   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.277225   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.277236   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.277250   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.277260   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.277269   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.277281   33042 round_trippers.go:580]     Audit-Id: 081dd6b1-a6f1-4d53-bd45-c67d84d35123
	I1212 20:33:37.277399   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:33:37.277753   33042 pod_ready.go:92] pod "kube-controller-manager-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:37.277773   33042 pod_ready.go:81] duration metric: took 4.596309ms waiting for pod "kube-controller-manager-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.277788   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.446001   33042 request.go:629] Waited for 168.144755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:33:37.446077   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rrmn
	I1212 20:33:37.446083   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.446091   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.446101   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.449486   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:33:37.449509   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.449520   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.449529   33042 round_trippers.go:580]     Audit-Id: 63769328-d572-474b-a24b-b89c28e24cd6
	I1212 20:33:37.449538   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.449543   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.449548   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.449554   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.449693   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4rrmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"2bcd718f-0c7c-461a-895e-44a0c1d566fd","resourceVersion":"816","creationTimestamp":"2023-12-12T20:19:48Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 20:33:37.645961   33042 request.go:629] Waited for 195.888851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:37.646022   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:37.646031   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.646043   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.646061   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.649499   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:33:37.649522   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.649529   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.649534   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.649540   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.649545   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.649551   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.649560   33042 round_trippers.go:580]     Audit-Id: 7aaf4e82-e24f-46eb-90a6-c5040e66b108
	I1212 20:33:37.649704   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:33:37.650010   33042 pod_ready.go:92] pod "kube-proxy-4rrmn" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:37.650028   33042 pod_ready.go:81] duration metric: took 372.229031ms waiting for pod "kube-proxy-4rrmn" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.650038   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:37.845401   33042 request.go:629] Waited for 195.281404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:33:37.845452   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxw8h
	I1212 20:33:37.845457   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:37.845464   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:37.845470   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:37.848742   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:33:37.848760   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:37.848766   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:37 GMT
	I1212 20:33:37.848771   33042 round_trippers.go:580]     Audit-Id: 00ad6139-850f-498d-840b-0126b554c797
	I1212 20:33:37.848776   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:37.848784   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:37.848793   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:37.848801   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:37.849253   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sxw8h","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f281e87-2597-4bd0-8ca4-cd7556c0a8e4","resourceVersion":"992","creationTimestamp":"2023-12-12T20:20:33Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:20:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1212 20:33:38.046057   33042 request.go:629] Waited for 196.293021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:33:38.046127   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m02
	I1212 20:33:38.046136   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:38.046151   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:38.046164   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:38.048753   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:38.048782   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:38.048793   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:38.048801   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:38.048809   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:38 GMT
	I1212 20:33:38.048817   33042 round_trippers.go:580]     Audit-Id: 1226a0ba-c323-46af-aa1f-5a9f3cbbffe8
	I1212 20:33:38.048824   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:38.048834   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:38.049421   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m02","uid":"376c7e88-3106-4db4-9914-b7b057a0ebe7","resourceVersion":"1188","creationTimestamp":"2023-12-12T20:31:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_33_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:31:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 20:33:38.049783   33042 pod_ready.go:92] pod "kube-proxy-sxw8h" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:38.049805   33042 pod_ready.go:81] duration metric: took 399.757895ms waiting for pod "kube-proxy-sxw8h" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:38.049818   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:38.246254   33042 request.go:629] Waited for 196.364673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:33:38.246351   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xch7v
	I1212 20:33:38.246363   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:38.246375   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:38.246390   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:38.249404   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:38.249431   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:38.249442   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:38.249450   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:38.249458   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:38 GMT
	I1212 20:33:38.249465   33042 round_trippers.go:580]     Audit-Id: 932fd07d-f5af-4b31-b5ba-cc6a91720274
	I1212 20:33:38.249471   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:38.249477   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:38.249628   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xch7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"c47d9b9f-ae3c-4404-a33a-d689c4b3e034","resourceVersion":"1209","creationTimestamp":"2023-12-12T20:21:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e686dba3-c0b3-446b-880e-04da52205ebb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:21:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e686dba3-c0b3-446b-880e-04da52205ebb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1212 20:33:38.446391   33042 request.go:629] Waited for 196.366327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:33:38.446463   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818-m03
	I1212 20:33:38.446468   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:38.446476   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:38.446483   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:38.449722   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:33:38.449749   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:38.449760   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:38.449768   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:38.449777   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:38.449784   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:38 GMT
	I1212 20:33:38.449793   33042 round_trippers.go:580]     Audit-Id: af13a07a-7577-4ddd-be2e-b95731acea31
	I1212 20:33:38.449803   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:38.449901   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818-m03","uid":"94632321-4471-4b6c-b449-cdfe24f82c2b","resourceVersion":"1189","creationTimestamp":"2023-12-12T20:33:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T20_33_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:33:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 20:33:38.450159   33042 pod_ready.go:92] pod "kube-proxy-xch7v" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:38.450174   33042 pod_ready.go:81] duration metric: took 400.344664ms waiting for pod "kube-proxy-xch7v" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:38.450183   33042 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:38.645636   33042 request.go:629] Waited for 195.393465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:33:38.645697   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-562818
	I1212 20:33:38.645704   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:38.645714   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:38.645728   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:38.648236   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:38.648258   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:38.648268   33042 round_trippers.go:580]     Audit-Id: e9d2514a-1852-4eb8-b098-49d8fbcec8df
	I1212 20:33:38.648275   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:38.648280   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:38.648286   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:38.648290   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:38.648296   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:38 GMT
	I1212 20:33:38.648437   33042 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-562818","namespace":"kube-system","uid":"994614e5-3a18-422e-86ad-54c67237293d","resourceVersion":"859","creationTimestamp":"2023-12-12T20:19:34Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.mirror":"7fdc6c1dd71be88c3ada50ca81b581f2","kubernetes.io/config.seen":"2023-12-12T20:19:26.992797913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:19:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 20:33:38.846345   33042 request.go:629] Waited for 197.446577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:38.846423   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/multinode-562818
	I1212 20:33:38.846431   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:38.846441   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:38.846450   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:38.848978   33042 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 20:33:38.849003   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:38.849013   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:38.849021   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:38.849028   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:38.849038   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:38 GMT
	I1212 20:33:38.849046   33042 round_trippers.go:580]     Audit-Id: 5705ed1e-d2f3-4435-bc41-8497d0003714
	I1212 20:33:38.849055   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:38.849386   33042 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T20:19:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 20:33:38.849702   33042 pod_ready.go:92] pod "kube-scheduler-multinode-562818" in "kube-system" namespace has status "Ready":"True"
	I1212 20:33:38.849719   33042 pod_ready.go:81] duration metric: took 399.525405ms waiting for pod "kube-scheduler-multinode-562818" in "kube-system" namespace to be "Ready" ...
	I1212 20:33:38.849730   33042 pod_ready.go:38] duration metric: took 1.6011914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:33:38.849742   33042 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:33:38.849785   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:33:38.864433   33042 system_svc.go:56] duration metric: took 14.682111ms WaitForService to wait for kubelet.
	I1212 20:33:38.864457   33042 kubeadm.go:581] duration metric: took 1.636942694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:33:38.864473   33042 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:33:39.045900   33042 request.go:629] Waited for 181.343348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I1212 20:33:39.045959   33042 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I1212 20:33:39.045965   33042 round_trippers.go:469] Request Headers:
	I1212 20:33:39.045981   33042 round_trippers.go:473]     Accept: application/json, */*
	I1212 20:33:39.045988   33042 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 20:33:39.049349   33042 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 20:33:39.049371   33042 round_trippers.go:577] Response Headers:
	I1212 20:33:39.049378   33042 round_trippers.go:580]     Audit-Id: 444c0ae1-afc9-4c7e-86ff-84b8fd90d35a
	I1212 20:33:39.049383   33042 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 20:33:39.049388   33042 round_trippers.go:580]     Content-Type: application/json
	I1212 20:33:39.049393   33042 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ed76492c-0039-494a-a53f-6789e58f7428
	I1212 20:33:39.049398   33042 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5accf1e6-8758-4d2f-be21-8169683f3d77
	I1212 20:33:39.049403   33042 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:33:39 GMT
	I1212 20:33:39.049870   33042 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1211"},"items":[{"metadata":{"name":"multinode-562818","uid":"6c487c25-b0a4-437c-989c-fee0060c2167","resourceVersion":"869","creationTimestamp":"2023-12-12T20:19:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-562818","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-562818","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T20_19_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16237 chars]
	I1212 20:33:39.050411   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:33:39.050428   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:33:39.050437   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:33:39.050441   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:33:39.050445   33042 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:33:39.050448   33042 node_conditions.go:123] node cpu capacity is 2
	I1212 20:33:39.050454   33042 node_conditions.go:105] duration metric: took 185.974881ms to run NodePressure ...
	I1212 20:33:39.050463   33042 start.go:228] waiting for startup goroutines ...
	I1212 20:33:39.050480   33042 start.go:242] writing updated cluster config ...
	I1212 20:33:39.050738   33042 ssh_runner.go:195] Run: rm -f paused
	I1212 20:33:39.099136   33042 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 20:33:39.101862   33042 out.go:177] * Done! kubectl is now configured to use "multinode-562818" cluster and "default" namespace by default
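	The pod_ready waits in the trace above repeatedly GET each system pod and check whether its Ready condition is True, retrying within a 6m0s budget. A minimal client-go sketch of that polling pattern (illustrative kubeconfig path, namespace, pod name, and timeout; not minikube's actual pod_ready.go helper) might look like:

	// Hypothetical sketch, not minikube code: poll a pod's Ready condition
	// with client-go, the same check the pod_ready waits above report on.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls every 2s until the named pod reports Ready=True
	// or the timeout expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient GET errors as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Assumed standard kubeconfig; the run above uses its own profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 6 minutes mirrors the per-pod wait budget shown in the log above.
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-562818", 6*time.Minute); err != nil {
			fmt.Println("not ready:", err)
			return
		}
		fmt.Println("ready")
	}

	The sketch keeps polling on GET errors rather than failing fast, matching how the waits above simply retry until their 6m0s budget is spent.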
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 20:29:27 UTC, ends at Tue 2023-12-12 20:33:40 UTC. --
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.203926786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702413220203777933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=76132f3e-74ae-4904-8f4b-5da34131c0f4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.204991681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=72377493-96b8-40c4-9690-bd499102c600 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.205040571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72377493-96b8-40c4-9690-bd499102c600 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.205251338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b53cbb4109e0ff9d0eb4f5770515ffd7403382085a229cc51e51810a4370618,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702413032835248150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff5612bcfd314d149cb94cd6ea0cd64a69aec302feae727adf96965325f0358,PodSandboxId:a66b99f109daca0bbc70707d14bd1cfc31b77aa08021c1fa606061db7a01f85a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702413010556001207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5323eb4e04cca901caa9b9e649deed2c101f7f397294776a7b76ca235c9ab2,PodSandboxId:eccc1cd53b1dab8153101a2db12c9c9fb9893b1eff66866301dd9cbec35000f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702413009044506994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9f98058515c289b1cbc31428b2427f85db515b82301ac2389639d3a32aec22,PodSandboxId:076cecf581b85fbfc17eb8a0481d781b238350f6c4f46fbc8ee1019298e52b55,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702413004215612596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65cae2db63a95a7acd39457a71f4999fdef908bc04699a1ff93b8300425a8a8,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702413001729761641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda87a836a4f55df5efdb2c6114144daa5e8150e78de7ea9ef1bf020dff6643,PodSandboxId:d2e9abbc4317d0f76c6e91835fadf47802d0e73ad14c0e054054eeaf6c09ad1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702413001677447865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1d5
66fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4bb14e92f84f8c19cca37efe071fe7b6717df9b54b069fbb5622694e083dc1,PodSandboxId:89e7ea28cc88aaddad3b637c4886736532a724eef6f66a92507024b4eb57efcf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412996176624685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffb6e46f46dc6b77e3dd687eefab2c15cd080dea12588d24401450cc13cabd7,PodSandboxId:38d4891104470e229fd08ca50215be7b70f65d1d95ca298d2742f90fa6128942,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412996023430272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes.container.has
h: fcfc309f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8e2cb787433b202416b50ecf850b7959c3af0e2036ef42e18a68bb3ea406be,PodSandboxId:76fc7f684d3f1ea3a2cacb99a55562ad906bcf024e396d7b2f9e04cb4a68e9c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412995736510216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfe30ff1ca4de6670745d6e7deb4553c6aed7b336ec48dd94681cd67f5fc143,PodSandboxId:e3beecf1a2572802b022f4399b309003ad3c792a01e9be71957acb690c79e086,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412995586232694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e8f8cea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72377493-96b8-40c4-9690-bd499102c600 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.258246234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8008e7f6-f46c-4da7-9a36-695e6ec90f5b name=/runtime.v1.RuntimeService/Version
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.258412221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8008e7f6-f46c-4da7-9a36-695e6ec90f5b name=/runtime.v1.RuntimeService/Version
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.259742665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=976f86ed-4718-46ba-b4f4-1f089bc63864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.260141580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702413220260125398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=976f86ed-4718-46ba-b4f4-1f089bc63864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.260691975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dad287cb-4312-495c-938c-23be17b16c4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.260765194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dad287cb-4312-495c-938c-23be17b16c4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.261000385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b53cbb4109e0ff9d0eb4f5770515ffd7403382085a229cc51e51810a4370618,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702413032835248150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff5612bcfd314d149cb94cd6ea0cd64a69aec302feae727adf96965325f0358,PodSandboxId:a66b99f109daca0bbc70707d14bd1cfc31b77aa08021c1fa606061db7a01f85a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702413010556001207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5323eb4e04cca901caa9b9e649deed2c101f7f397294776a7b76ca235c9ab2,PodSandboxId:eccc1cd53b1dab8153101a2db12c9c9fb9893b1eff66866301dd9cbec35000f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702413009044506994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9f98058515c289b1cbc31428b2427f85db515b82301ac2389639d3a32aec22,PodSandboxId:076cecf581b85fbfc17eb8a0481d781b238350f6c4f46fbc8ee1019298e52b55,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702413004215612596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65cae2db63a95a7acd39457a71f4999fdef908bc04699a1ff93b8300425a8a8,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702413001729761641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda87a836a4f55df5efdb2c6114144daa5e8150e78de7ea9ef1bf020dff6643,PodSandboxId:d2e9abbc4317d0f76c6e91835fadf47802d0e73ad14c0e054054eeaf6c09ad1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702413001677447865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1d5
66fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4bb14e92f84f8c19cca37efe071fe7b6717df9b54b069fbb5622694e083dc1,PodSandboxId:89e7ea28cc88aaddad3b637c4886736532a724eef6f66a92507024b4eb57efcf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412996176624685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffb6e46f46dc6b77e3dd687eefab2c15cd080dea12588d24401450cc13cabd7,PodSandboxId:38d4891104470e229fd08ca50215be7b70f65d1d95ca298d2742f90fa6128942,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412996023430272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes.container.has
h: fcfc309f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8e2cb787433b202416b50ecf850b7959c3af0e2036ef42e18a68bb3ea406be,PodSandboxId:76fc7f684d3f1ea3a2cacb99a55562ad906bcf024e396d7b2f9e04cb4a68e9c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412995736510216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfe30ff1ca4de6670745d6e7deb4553c6aed7b336ec48dd94681cd67f5fc143,PodSandboxId:e3beecf1a2572802b022f4399b309003ad3c792a01e9be71957acb690c79e086,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412995586232694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e8f8cea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dad287cb-4312-495c-938c-23be17b16c4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.302667041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f844feec-f1fa-47a5-894e-139084969313 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.302746762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f844feec-f1fa-47a5-894e-139084969313 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.304153080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e0dd671c-aad9-4833-86a7-453e5c1b9dd1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.304620884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702413220304607408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e0dd671c-aad9-4833-86a7-453e5c1b9dd1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.305157524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb9d9732-6691-455c-a55a-bda7c4c869d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.305226020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb9d9732-6691-455c-a55a-bda7c4c869d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.305483524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b53cbb4109e0ff9d0eb4f5770515ffd7403382085a229cc51e51810a4370618,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702413032835248150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff5612bcfd314d149cb94cd6ea0cd64a69aec302feae727adf96965325f0358,PodSandboxId:a66b99f109daca0bbc70707d14bd1cfc31b77aa08021c1fa606061db7a01f85a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702413010556001207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5323eb4e04cca901caa9b9e649deed2c101f7f397294776a7b76ca235c9ab2,PodSandboxId:eccc1cd53b1dab8153101a2db12c9c9fb9893b1eff66866301dd9cbec35000f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702413009044506994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9f98058515c289b1cbc31428b2427f85db515b82301ac2389639d3a32aec22,PodSandboxId:076cecf581b85fbfc17eb8a0481d781b238350f6c4f46fbc8ee1019298e52b55,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702413004215612596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65cae2db63a95a7acd39457a71f4999fdef908bc04699a1ff93b8300425a8a8,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702413001729761641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda87a836a4f55df5efdb2c6114144daa5e8150e78de7ea9ef1bf020dff6643,PodSandboxId:d2e9abbc4317d0f76c6e91835fadf47802d0e73ad14c0e054054eeaf6c09ad1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702413001677447865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1d5
66fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4bb14e92f84f8c19cca37efe071fe7b6717df9b54b069fbb5622694e083dc1,PodSandboxId:89e7ea28cc88aaddad3b637c4886736532a724eef6f66a92507024b4eb57efcf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412996176624685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffb6e46f46dc6b77e3dd687eefab2c15cd080dea12588d24401450cc13cabd7,PodSandboxId:38d4891104470e229fd08ca50215be7b70f65d1d95ca298d2742f90fa6128942,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412996023430272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes.container.has
h: fcfc309f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8e2cb787433b202416b50ecf850b7959c3af0e2036ef42e18a68bb3ea406be,PodSandboxId:76fc7f684d3f1ea3a2cacb99a55562ad906bcf024e396d7b2f9e04cb4a68e9c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412995736510216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfe30ff1ca4de6670745d6e7deb4553c6aed7b336ec48dd94681cd67f5fc143,PodSandboxId:e3beecf1a2572802b022f4399b309003ad3c792a01e9be71957acb690c79e086,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412995586232694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e8f8cea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb9d9732-6691-455c-a55a-bda7c4c869d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.347932115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1b6dff50-5018-4eef-90e0-001733acaf1f name=/runtime.v1.RuntimeService/Version
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.348015718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1b6dff50-5018-4eef-90e0-001733acaf1f name=/runtime.v1.RuntimeService/Version
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.349107599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e8a68b4e-3367-4eae-8626-78436cd7d802 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.349682748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702413220349666470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e8a68b4e-3367-4eae-8626-78436cd7d802 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.350099627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=97a494be-85df-4d59-91bc-11e91f3b34c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.350144386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=97a494be-85df-4d59-91bc-11e91f3b34c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:33:40 multinode-562818 crio[711]: time="2023-12-12 20:33:40.350472063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b53cbb4109e0ff9d0eb4f5770515ffd7403382085a229cc51e51810a4370618,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702413032835248150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff5612bcfd314d149cb94cd6ea0cd64a69aec302feae727adf96965325f0358,PodSandboxId:a66b99f109daca0bbc70707d14bd1cfc31b77aa08021c1fa606061db7a01f85a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702413010556001207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9wvsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59edc235-8efb-4eda-85e5-8ef3403bf5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 21bf268b,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5323eb4e04cca901caa9b9e649deed2c101f7f397294776a7b76ca235c9ab2,PodSandboxId:eccc1cd53b1dab8153101a2db12c9c9fb9893b1eff66866301dd9cbec35000f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702413009044506994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-689lp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77852fc-eb8a-4027-98e1-070b4ca43f54,},Annotations:map[string]string{io.kubernetes.container.hash: f914342d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9f98058515c289b1cbc31428b2427f85db515b82301ac2389639d3a32aec22,PodSandboxId:076cecf581b85fbfc17eb8a0481d781b238350f6c4f46fbc8ee1019298e52b55,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702413004215612596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24p9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e80eb9ab-2919-4be1-890d-34c26202f7fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4cedd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65cae2db63a95a7acd39457a71f4999fdef908bc04699a1ff93b8300425a8a8,PodSandboxId:f219bd1d2d050e67a9f13e5b33244aee5c78816d1580e42aa1a0db1ac324f93e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702413001729761641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9efe55ce-d87d-4074-9983-d880908d6d3d,},Annotations:map[string]string{io.kubernetes.container.hash: 159bc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda87a836a4f55df5efdb2c6114144daa5e8150e78de7ea9ef1bf020dff6643,PodSandboxId:d2e9abbc4317d0f76c6e91835fadf47802d0e73ad14c0e054054eeaf6c09ad1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702413001677447865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rrmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcd718f-0c7c-461a-895e-44a0c1d5
66fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44fa12fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4bb14e92f84f8c19cca37efe071fe7b6717df9b54b069fbb5622694e083dc1,PodSandboxId:89e7ea28cc88aaddad3b637c4886736532a724eef6f66a92507024b4eb57efcf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702412996176624685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fdc6c1dd71be88c3ada50ca81b581f2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffb6e46f46dc6b77e3dd687eefab2c15cd080dea12588d24401450cc13cabd7,PodSandboxId:38d4891104470e229fd08ca50215be7b70f65d1d95ca298d2742f90fa6128942,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702412996023430272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e147e28129df59a83fcfb97d45da77e4,},Annotations:map[string]string{io.kubernetes.container.has
h: fcfc309f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8e2cb787433b202416b50ecf850b7959c3af0e2036ef42e18a68bb3ea406be,PodSandboxId:76fc7f684d3f1ea3a2cacb99a55562ad906bcf024e396d7b2f9e04cb4a68e9c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702412995736510216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7cd7c8c41f9e966d5f21f814b258e09,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfe30ff1ca4de6670745d6e7deb4553c6aed7b336ec48dd94681cd67f5fc143,PodSandboxId:e3beecf1a2572802b022f4399b309003ad3c792a01e9be71957acb690c79e086,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702412995586232694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-562818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 193a44f373aa39bf67a4fef20e3c8d27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7e8f8cea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=97a494be-85df-4d59-91bc-11e91f3b34c8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5b53cbb4109e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   f219bd1d2d050       storage-provisioner
	dff5612bcfd31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   a66b99f109dac       busybox-5bc68d56bd-9wvsx
	0d5323eb4e04c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   eccc1cd53b1da       coredns-5dd5756b68-689lp
	db9f98058515c       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   076cecf581b85       kindnet-24p9c
	b65cae2db63a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   f219bd1d2d050       storage-provisioner
	3bda87a836a4f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   d2e9abbc4317d       kube-proxy-4rrmn
	5b4bb14e92f84       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   89e7ea28cc88a       kube-scheduler-multinode-562818
	4ffb6e46f46dc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   38d4891104470       etcd-multinode-562818
	8c8e2cb787433       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   76fc7f684d3f1       kube-controller-manager-multinode-562818
	bdfe30ff1ca4d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   e3beecf1a2572       kube-apiserver-multinode-562818
	
	
	==> coredns [0d5323eb4e04cca901caa9b9e649deed2c101f7f397294776a7b76ca235c9ab2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60032 - 56610 "HINFO IN 3819806204763466414.3564727269435532371. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010395448s
	
	
	==> describe nodes <==
	Name:               multinode-562818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-562818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-562818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T20_19_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:19:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-562818
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:33:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:30:31 +0000   Tue, 12 Dec 2023 20:19:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:30:31 +0000   Tue, 12 Dec 2023 20:19:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:30:31 +0000   Tue, 12 Dec 2023 20:19:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:30:31 +0000   Tue, 12 Dec 2023 20:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    multinode-562818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 477de0dffe274051ae282f465573daea
	  System UUID:                477de0df-fe27-4051-ae28-2f465573daea
	  Boot ID:                    352e5050-71e0-4d4b-8be6-74fa4ac53e45
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9wvsx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-689lp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-562818                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-24p9c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-562818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-562818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4rrmn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-562818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-562818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-562818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-562818 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-562818 event: Registered Node multinode-562818 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-562818 status is now: NodeReady
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m46s)  kubelet          Node multinode-562818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m46s)  kubelet          Node multinode-562818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x7 over 3m46s)  kubelet          Node multinode-562818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node multinode-562818 event: Registered Node multinode-562818 in Controller
	
	
	Name:               multinode-562818-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-562818-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-562818
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T20_33_36_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:31:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-562818-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:33:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:31:55 +0000   Tue, 12 Dec 2023 20:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:31:55 +0000   Tue, 12 Dec 2023 20:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:31:55 +0000   Tue, 12 Dec 2023 20:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:31:55 +0000   Tue, 12 Dec 2023 20:31:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    multinode-562818-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bca6f1b61c874e68865500389e098c63
	  System UUID:                bca6f1b6-1c87-4e68-8655-00389e098c63
	  Boot ID:                    f4dccdc7-2ac9-4612-b312-e1bdd16bc5ef
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-vrjwk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-cmz7d               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-sxw8h            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 106s                 kube-proxy  
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   NodeNotReady             2m48s                kubelet     Node multinode-562818-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m9s (x2 over 3m9s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeReady                107s (x2 over 12m)   kubelet     Node multinode-562818-m02 status is now: NodeReady
	  Normal   NodeNotSchedulable       107s                 kubelet     Node multinode-562818-m02 status is now: NodeNotSchedulable
	  Normal   NodeHasSufficientMemory  106s (x6 over 13m)   kubelet     Node multinode-562818-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     106s (x6 over 13m)   kubelet     Node multinode-562818-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    106s (x6 over 13m)   kubelet     Node multinode-562818-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeSchedulable          106s                 kubelet     Node multinode-562818-m02 status is now: NodeSchedulable
	  Normal   Starting                 105s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)  kubelet     Node multinode-562818-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet     Node multinode-562818-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)  kubelet     Node multinode-562818-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                 kubelet     Node multinode-562818-m02 status is now: NodeReady
	
	
	Name:               multinode-562818-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-562818-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-562818
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T20_33_36_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-562818-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:33:36 +0000   Tue, 12 Dec 2023 20:33:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:33:36 +0000   Tue, 12 Dec 2023 20:33:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:33:36 +0000   Tue, 12 Dec 2023 20:33:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:33:36 +0000   Tue, 12 Dec 2023 20:33:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    multinode-562818-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8ab5802865b49b0ac0dce69c9a445f6
	  System UUID:                d8ab5802-865b-49b0-ac0d-ce69c9a445f6
	  Boot ID:                    4e8d0607-93c5-4458-a091-c4560500abd7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-98xh8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-q7n6w               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-xch7v            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 2s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-562818-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-562818-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-562818-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-562818-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-562818-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-562818-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-562818-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-562818-m03 status is now: NodeReady
	  Normal   NodeNotReady             67s                kubelet     Node multinode-562818-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        35s (x2 over 95s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       6s                 kubelet     Node multinode-562818-m03 status is now: NodeNotSchedulable
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-562818-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-562818-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-562818-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-562818-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec12 20:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067357] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.378985] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.440508] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147194] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.550474] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.387423] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.102366] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.164340] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.121227] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.216903] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +17.728159] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	
	
	==> etcd [4ffb6e46f46dc6b77e3dd687eefab2c15cd080dea12588d24401450cc13cabd7] <==
	{"level":"info","ts":"2023-12-12T20:29:57.701705Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:29:57.701744Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:29:57.701994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 switched to configuration voters=(2477931171060957778)"}
	{"level":"info","ts":"2023-12-12T20:29:57.702116Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","added-peer-id":"226361457cf4c252","added-peer-peer-urls":["https://192.168.39.77:2380"]}
	{"level":"info","ts":"2023-12-12T20:29:57.702214Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:29:57.702253Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:29:57.713751Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T20:29:57.715523Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"226361457cf4c252","initial-advertise-peer-urls":["https://192.168.39.77:2380"],"listen-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T20:29:57.71559Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T20:29:57.715696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2023-12-12T20:29:57.715724Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2023-12-12T20:29:59.067664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T20:29:59.067769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T20:29:59.067809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgPreVoteResp from 226361457cf4c252 at term 2"}
	{"level":"info","ts":"2023-12-12T20:29:59.06784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T20:29:59.067865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgVoteResp from 226361457cf4c252 at term 3"}
	{"level":"info","ts":"2023-12-12T20:29:59.067901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T20:29:59.067929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226361457cf4c252 elected leader 226361457cf4c252 at term 3"}
	{"level":"info","ts":"2023-12-12T20:29:59.06941Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"226361457cf4c252","local-member-attributes":"{Name:multinode-562818 ClientURLs:[https://192.168.39.77:2379]}","request-path":"/0/members/226361457cf4c252/attributes","cluster-id":"b43d13dd46d94ad8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:29:59.069601Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:29:59.069629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:29:59.071012Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:29:59.071028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.77:2379"}
	{"level":"info","ts":"2023-12-12T20:29:59.07128Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:29:59.071318Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:33:40 up 4 min,  0 users,  load average: 0.12, 0.15, 0.08
	Linux multinode-562818 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [db9f98058515c289b1cbc31428b2427f85db515b82301ac2389639d3a32aec22] <==
	I1212 20:32:55.789151       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	I1212 20:32:55.789264       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I1212 20:32:55.789270       1 main.go:250] Node multinode-562818-m03 has CIDR [10.244.3.0/24] 
	I1212 20:33:05.794082       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:33:05.794131       1 main.go:227] handling current node
	I1212 20:33:05.794151       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 20:33:05.794157       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	I1212 20:33:05.794261       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I1212 20:33:05.794293       1 main.go:250] Node multinode-562818-m03 has CIDR [10.244.3.0/24] 
	I1212 20:33:15.800391       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:33:15.803513       1 main.go:227] handling current node
	I1212 20:33:15.803546       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 20:33:15.803584       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	I1212 20:33:15.803707       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I1212 20:33:15.803734       1 main.go:250] Node multinode-562818-m03 has CIDR [10.244.3.0/24] 
	I1212 20:33:25.882972       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:33:25.883091       1 main.go:227] handling current node
	I1212 20:33:25.883127       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 20:33:25.883145       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	I1212 20:33:25.883312       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I1212 20:33:25.883424       1 main.go:250] Node multinode-562818-m03 has CIDR [10.244.3.0/24] 
	I1212 20:33:35.889631       1 main.go:223] Handling node with IPs: map[192.168.39.77:{}]
	I1212 20:33:35.889698       1 main.go:227] handling current node
	I1212 20:33:35.889714       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 20:33:35.889720       1 main.go:250] Node multinode-562818-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [bdfe30ff1ca4de6670745d6e7deb4553c6aed7b336ec48dd94681cd67f5fc143] <==
	I1212 20:30:00.443540       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 20:30:00.443710       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 20:30:00.446435       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1212 20:30:00.446515       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1212 20:30:00.485793       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:30:00.489430       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 20:30:00.497211       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1212 20:30:00.510275       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:30:00.546712       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:30:00.550872       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:30:00.550905       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:30:00.550930       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:30:00.550956       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:30:00.573135       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:30:00.601463       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 20:30:00.601502       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:30:00.601772       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:30:00.601935       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:30:01.400186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:30:03.114047       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 20:30:03.281388       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:30:03.298284       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:30:03.375676       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:30:03.387138       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:30:50.355821       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8c8e2cb787433b202416b50ecf850b7959c3af0e2036ef42e18a68bb3ea406be] <==
	I1212 20:31:54.930249       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m03"
	I1212 20:31:55.574392       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m03"
	I1212 20:31:55.576977       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-562818-m02\" does not exist"
	I1212 20:31:55.577456       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-vbpn5" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-vbpn5"
	I1212 20:31:55.588904       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-562818-m02" podCIDRs=["10.244.1.0/24"]
	I1212 20:31:55.714263       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m02"
	I1212 20:31:56.493053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.212µs"
	I1212 20:32:07.742563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="243.037µs"
	I1212 20:32:08.348733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="89.226µs"
	I1212 20:32:08.355304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.325µs"
	I1212 20:32:33.600307       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m02"
	I1212 20:33:32.384146       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-vrjwk"
	I1212 20:33:32.390654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.951399ms"
	I1212 20:33:32.408309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.686916ms"
	I1212 20:33:32.408957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="134.417µs"
	I1212 20:33:32.409099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.594µs"
	I1212 20:33:33.618766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.500501ms"
	I1212 20:33:33.619749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.696µs"
	I1212 20:33:35.386477       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m02"
	I1212 20:33:36.106589       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m02"
	I1212 20:33:36.106860       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-562818-m03\" does not exist"
	I1212 20:33:36.106963       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-98xh8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-98xh8"
	I1212 20:33:36.131766       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-562818-m03" podCIDRs=["10.244.2.0/24"]
	I1212 20:33:36.177013       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-562818-m02"
	I1212 20:33:37.036502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.549µs"
	
	
	==> kube-proxy [3bda87a836a4f55df5efdb2c6114144daa5e8150e78de7ea9ef1bf020dff6643] <==
	I1212 20:30:01.975303       1 server_others.go:69] "Using iptables proxy"
	I1212 20:30:01.985838       1 node.go:141] Successfully retrieved node IP: 192.168.39.77
	I1212 20:30:02.045954       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:30:02.046074       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:30:02.049636       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:30:02.049699       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:30:02.050028       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:30:02.050061       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:30:02.050871       1 config.go:188] "Starting service config controller"
	I1212 20:30:02.050921       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:30:02.050948       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:30:02.050978       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:30:02.052735       1 config.go:315] "Starting node config controller"
	I1212 20:30:02.052774       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:30:02.151974       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:30:02.152059       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:30:02.153240       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5b4bb14e92f84f8c19cca37efe071fe7b6717df9b54b069fbb5622694e083dc1] <==
	I1212 20:29:58.172579       1 serving.go:348] Generated self-signed cert in-memory
	W1212 20:30:00.494468       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:30:00.494619       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:30:00.494651       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:30:00.494748       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:30:00.557008       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 20:30:00.557060       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:30:00.563026       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 20:30:00.571659       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:30:00.571724       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:30:00.571750       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 20:30:00.673785       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:29:27 UTC, ends at Tue 2023-12-12 20:33:41 UTC. --
	Dec 12 20:30:02 multinode-562818 kubelet[916]: E1212 20:30:02.396018     916 projected.go:198] Error preparing data for projected volume kube-api-access-jxh85 for pod default/busybox-5bc68d56bd-9wvsx: object "default"/"kube-root-ca.crt" not registered
	Dec 12 20:30:02 multinode-562818 kubelet[916]: E1212 20:30:02.396072     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59edc235-8efb-4eda-85e5-8ef3403bf5f3-kube-api-access-jxh85 podName:59edc235-8efb-4eda-85e5-8ef3403bf5f3 nodeName:}" failed. No retries permitted until 2023-12-12 20:30:04.396057778 +0000 UTC m=+10.077671023 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jxh85" (UniqueName: "kubernetes.io/projected/59edc235-8efb-4eda-85e5-8ef3403bf5f3-kube-api-access-jxh85") pod "busybox-5bc68d56bd-9wvsx" (UID: "59edc235-8efb-4eda-85e5-8ef3403bf5f3") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 20:30:02 multinode-562818 kubelet[916]: E1212 20:30:02.607972     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-689lp" podUID="e77852fc-eb8a-4027-98e1-070b4ca43f54"
	Dec 12 20:30:02 multinode-562818 kubelet[916]: E1212 20:30:02.608517     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-9wvsx" podUID="59edc235-8efb-4eda-85e5-8ef3403bf5f3"
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.312520     916 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.312616     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e77852fc-eb8a-4027-98e1-070b4ca43f54-config-volume podName:e77852fc-eb8a-4027-98e1-070b4ca43f54 nodeName:}" failed. No retries permitted until 2023-12-12 20:30:08.312598588 +0000 UTC m=+13.994211836 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e77852fc-eb8a-4027-98e1-070b4ca43f54-config-volume") pod "coredns-5dd5756b68-689lp" (UID: "e77852fc-eb8a-4027-98e1-070b4ca43f54") : object "kube-system"/"coredns" not registered
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.413025     916 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.413054     916 projected.go:198] Error preparing data for projected volume kube-api-access-jxh85 for pod default/busybox-5bc68d56bd-9wvsx: object "default"/"kube-root-ca.crt" not registered
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.413105     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59edc235-8efb-4eda-85e5-8ef3403bf5f3-kube-api-access-jxh85 podName:59edc235-8efb-4eda-85e5-8ef3403bf5f3 nodeName:}" failed. No retries permitted until 2023-12-12 20:30:08.413090019 +0000 UTC m=+14.094703267 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jxh85" (UniqueName: "kubernetes.io/projected/59edc235-8efb-4eda-85e5-8ef3403bf5f3-kube-api-access-jxh85") pod "busybox-5bc68d56bd-9wvsx" (UID: "59edc235-8efb-4eda-85e5-8ef3403bf5f3") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.609548     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-9wvsx" podUID="59edc235-8efb-4eda-85e5-8ef3403bf5f3"
	Dec 12 20:30:04 multinode-562818 kubelet[916]: E1212 20:30:04.609643     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-689lp" podUID="e77852fc-eb8a-4027-98e1-070b4ca43f54"
	Dec 12 20:30:05 multinode-562818 kubelet[916]: I1212 20:30:05.797413     916 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 20:30:32 multinode-562818 kubelet[916]: I1212 20:30:32.811290     916 scope.go:117] "RemoveContainer" containerID="b65cae2db63a95a7acd39457a71f4999fdef908bc04699a1ff93b8300425a8a8"
	Dec 12 20:30:54 multinode-562818 kubelet[916]: E1212 20:30:54.623967     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 20:30:54 multinode-562818 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 20:30:54 multinode-562818 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 20:30:54 multinode-562818 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 20:31:54 multinode-562818 kubelet[916]: E1212 20:31:54.623658     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 20:31:54 multinode-562818 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 20:31:54 multinode-562818 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 20:31:54 multinode-562818 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 20:32:54 multinode-562818 kubelet[916]: E1212 20:32:54.638658     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 20:32:54 multinode-562818 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 20:32:54 multinode-562818 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 20:32:54 multinode-562818 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-562818 -n multinode-562818
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-562818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (686.03s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 stop
E1212 20:33:56.433591   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:34:39.385093   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-562818 stop: exit status 82 (2m1.439839016s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-562818"  ...
	* Stopping node "multinode-562818"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-562818 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-562818 status: exit status 3 (18.697801086s)

                                                
                                                
-- stdout --
	multinode-562818
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-562818-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:36:03.555610   35278 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	E1212 20:36:03.555648   35278 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-562818 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-562818 -n multinode-562818
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-562818 -n multinode-562818: exit status 3 (3.188261443s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:36:06.915555   35376 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	E1212 20:36:06.915580   35376 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-562818" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.33s)

                                                
                                    
x
+
TestPreload (186.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-824561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1212 20:44:39.384978   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-824561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.555044504s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-824561 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-824561 image pull gcr.io/k8s-minikube/busybox: (1.229447309s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-824561
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-824561: (7.106357642s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-824561 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1212 20:46:48.881325   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:46:59.479913   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-824561 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.752477835s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-824561 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-12-12 20:47:30.359092858 +0000 UTC m=+3051.579265442
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-824561 -n test-preload-824561
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-824561 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-824561 logs -n 25: (1.149963955s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n multinode-562818 sudo cat                                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /home/docker/cp-test_multinode-562818-m03_multinode-562818.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt                       | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m02:/home/docker/cp-test_multinode-562818-m03_multinode-562818-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n                                                                 | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | multinode-562818-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-562818 ssh -n multinode-562818-m02 sudo cat                                   | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	|         | /home/docker/cp-test_multinode-562818-m03_multinode-562818-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-562818 node stop m03                                                          | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:21 UTC |
	| node    | multinode-562818 node start                                                             | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:21 UTC | 12 Dec 23 20:22 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-562818                                                                | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:22 UTC |                     |
	| stop    | -p multinode-562818                                                                     | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:22 UTC |                     |
	| start   | -p multinode-562818                                                                     | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:24 UTC | 12 Dec 23 20:33 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-562818                                                                | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:33 UTC |                     |
	| node    | multinode-562818 node delete                                                            | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:33 UTC | 12 Dec 23 20:33 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-562818 stop                                                                   | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:33 UTC |                     |
	| start   | -p multinode-562818                                                                     | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:36 UTC | 12 Dec 23 20:43 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-562818                                                                | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:43 UTC |                     |
	| start   | -p multinode-562818-m02                                                                 | multinode-562818-m02 | jenkins | v1.32.0 | 12 Dec 23 20:43 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-562818-m03                                                                 | multinode-562818-m03 | jenkins | v1.32.0 | 12 Dec 23 20:43 UTC | 12 Dec 23 20:44 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-562818                                                                 | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:44 UTC |                     |
	| delete  | -p multinode-562818-m03                                                                 | multinode-562818-m03 | jenkins | v1.32.0 | 12 Dec 23 20:44 UTC | 12 Dec 23 20:44 UTC |
	| delete  | -p multinode-562818                                                                     | multinode-562818     | jenkins | v1.32.0 | 12 Dec 23 20:44 UTC | 12 Dec 23 20:44 UTC |
	| start   | -p test-preload-824561                                                                  | test-preload-824561  | jenkins | v1.32.0 | 12 Dec 23 20:44 UTC | 12 Dec 23 20:46 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-824561 image pull                                                          | test-preload-824561  | jenkins | v1.32.0 | 12 Dec 23 20:46 UTC | 12 Dec 23 20:46 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-824561                                                                  | test-preload-824561  | jenkins | v1.32.0 | 12 Dec 23 20:46 UTC | 12 Dec 23 20:46 UTC |
	| start   | -p test-preload-824561                                                                  | test-preload-824561  | jenkins | v1.32.0 | 12 Dec 23 20:46 UTC | 12 Dec 23 20:47 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-824561 image list                                                          | test-preload-824561  | jenkins | v1.32.0 | 12 Dec 23 20:47 UTC | 12 Dec 23 20:47 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 20:46:13
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:46:13.427893   38081 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:46:13.428048   38081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:46:13.428073   38081 out.go:309] Setting ErrFile to fd 2...
	I1212 20:46:13.428082   38081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:46:13.428294   38081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:46:13.428827   38081 out.go:303] Setting JSON to false
	I1212 20:46:13.429686   38081 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5327,"bootTime":1702408646,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:46:13.429743   38081 start.go:138] virtualization: kvm guest
	I1212 20:46:13.432298   38081 out.go:177] * [test-preload-824561] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:46:13.433972   38081 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:46:13.435495   38081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:46:13.434031   38081 notify.go:220] Checking for updates...
	I1212 20:46:13.438577   38081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:46:13.440149   38081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:46:13.441629   38081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:46:13.442954   38081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:46:13.444931   38081 config.go:182] Loaded profile config "test-preload-824561": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 20:46:13.445325   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:46:13.445368   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:46:13.459079   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
	I1212 20:46:13.459471   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:46:13.459973   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:46:13.459995   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:46:13.460285   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:46:13.460460   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:13.462648   38081 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 20:46:13.463958   38081 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:46:13.464252   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:46:13.464289   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:46:13.478729   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I1212 20:46:13.479172   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:46:13.479615   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:46:13.479636   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:46:13.480098   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:46:13.480300   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:13.516814   38081 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 20:46:13.518180   38081 start.go:298] selected driver: kvm2
	I1212 20:46:13.518194   38081 start.go:902] validating driver "kvm2" against &{Name:test-preload-824561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-824561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:46:13.518312   38081 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:46:13.518972   38081 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:46:13.519054   38081 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 20:46:13.534124   38081 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 20:46:13.534532   38081 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:46:13.534611   38081 cni.go:84] Creating CNI manager for ""
	I1212 20:46:13.534628   38081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:46:13.534643   38081 start_flags.go:323] config:
	{Name:test-preload-824561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-824561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:46:13.534862   38081 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:46:13.536857   38081 out.go:177] * Starting control plane node test-preload-824561 in cluster test-preload-824561
	I1212 20:46:13.538103   38081 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 20:46:13.560981   38081 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1212 20:46:13.561022   38081 cache.go:56] Caching tarball of preloaded images
	I1212 20:46:13.561191   38081 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 20:46:13.562775   38081 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1212 20:46:13.564006   38081 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 20:46:13.590495   38081 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1212 20:46:20.009294   38081 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 20:46:20.009390   38081 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 20:46:20.910674   38081 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I1212 20:46:20.910797   38081 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/config.json ...
	I1212 20:46:20.911012   38081 start.go:365] acquiring machines lock for test-preload-824561: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:46:20.911080   38081 start.go:369] acquired machines lock for "test-preload-824561" in 41.41µs
	I1212 20:46:20.911091   38081 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:46:20.911098   38081 fix.go:54] fixHost starting: 
	I1212 20:46:20.911376   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:46:20.911409   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:46:20.925321   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
	I1212 20:46:20.925738   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:46:20.926118   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:46:20.926142   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:46:20.926491   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:46:20.926700   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:20.926856   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetState
	I1212 20:46:20.928438   38081 fix.go:102] recreateIfNeeded on test-preload-824561: state=Stopped err=<nil>
	I1212 20:46:20.928457   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	W1212 20:46:20.928579   38081 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:46:20.932315   38081 out.go:177] * Restarting existing kvm2 VM for "test-preload-824561" ...
	I1212 20:46:20.933697   38081 main.go:141] libmachine: (test-preload-824561) Calling .Start
	I1212 20:46:20.933877   38081 main.go:141] libmachine: (test-preload-824561) Ensuring networks are active...
	I1212 20:46:20.934707   38081 main.go:141] libmachine: (test-preload-824561) Ensuring network default is active
	I1212 20:46:20.935002   38081 main.go:141] libmachine: (test-preload-824561) Ensuring network mk-test-preload-824561 is active
	I1212 20:46:20.935377   38081 main.go:141] libmachine: (test-preload-824561) Getting domain xml...
	I1212 20:46:20.936011   38081 main.go:141] libmachine: (test-preload-824561) Creating domain...
	I1212 20:46:22.152934   38081 main.go:141] libmachine: (test-preload-824561) Waiting to get IP...
	I1212 20:46:22.153808   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:22.154224   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:22.154290   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:22.154190   38127 retry.go:31] will retry after 274.424872ms: waiting for machine to come up
	I1212 20:46:22.430841   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:22.431365   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:22.431388   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:22.431314   38127 retry.go:31] will retry after 331.810374ms: waiting for machine to come up
	I1212 20:46:22.765035   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:22.765416   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:22.765447   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:22.765358   38127 retry.go:31] will retry after 476.493562ms: waiting for machine to come up
	I1212 20:46:23.243062   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:23.243472   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:23.243504   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:23.243417   38127 retry.go:31] will retry after 376.242613ms: waiting for machine to come up
	I1212 20:46:23.620837   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:23.621256   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:23.621286   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:23.621201   38127 retry.go:31] will retry after 723.559164ms: waiting for machine to come up
	I1212 20:46:24.346000   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:24.346370   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:24.346393   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:24.346322   38127 retry.go:31] will retry after 658.540852ms: waiting for machine to come up
	I1212 20:46:25.006453   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:25.006942   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:25.006970   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:25.006896   38127 retry.go:31] will retry after 785.948845ms: waiting for machine to come up
	I1212 20:46:25.794785   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:25.795229   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:25.795275   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:25.795177   38127 retry.go:31] will retry after 1.17490441s: waiting for machine to come up
	I1212 20:46:26.971973   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:26.972445   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:26.972476   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:26.972384   38127 retry.go:31] will retry after 1.584646742s: waiting for machine to come up
	I1212 20:46:28.559132   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:28.559645   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:28.559675   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:28.559600   38127 retry.go:31] will retry after 2.254428113s: waiting for machine to come up
	I1212 20:46:30.815193   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:30.815695   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:30.815727   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:30.815618   38127 retry.go:31] will retry after 2.704534541s: waiting for machine to come up
	I1212 20:46:33.522820   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:33.523191   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:33.523222   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:33.523132   38127 retry.go:31] will retry after 2.884987695s: waiting for machine to come up
	I1212 20:46:36.409369   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:36.409729   38081 main.go:141] libmachine: (test-preload-824561) DBG | unable to find current IP address of domain test-preload-824561 in network mk-test-preload-824561
	I1212 20:46:36.409757   38081 main.go:141] libmachine: (test-preload-824561) DBG | I1212 20:46:36.409679   38127 retry.go:31] will retry after 3.625359307s: waiting for machine to come up
	I1212 20:46:40.039397   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.039740   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has current primary IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.039758   38081 main.go:141] libmachine: (test-preload-824561) Found IP for machine: 192.168.39.111
	I1212 20:46:40.039767   38081 main.go:141] libmachine: (test-preload-824561) Reserving static IP address...
	I1212 20:46:40.040141   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "test-preload-824561", mac: "52:54:00:fd:76:bf", ip: "192.168.39.111"} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.040173   38081 main.go:141] libmachine: (test-preload-824561) DBG | skip adding static IP to network mk-test-preload-824561 - found existing host DHCP lease matching {name: "test-preload-824561", mac: "52:54:00:fd:76:bf", ip: "192.168.39.111"}
	I1212 20:46:40.040185   38081 main.go:141] libmachine: (test-preload-824561) Reserved static IP address: 192.168.39.111
	I1212 20:46:40.040207   38081 main.go:141] libmachine: (test-preload-824561) Waiting for SSH to be available...
	I1212 20:46:40.040220   38081 main.go:141] libmachine: (test-preload-824561) DBG | Getting to WaitForSSH function...
	I1212 20:46:40.042101   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.042400   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.042422   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.042525   38081 main.go:141] libmachine: (test-preload-824561) DBG | Using SSH client type: external
	I1212 20:46:40.042554   38081 main.go:141] libmachine: (test-preload-824561) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa (-rw-------)
	I1212 20:46:40.042585   38081 main.go:141] libmachine: (test-preload-824561) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 20:46:40.042602   38081 main.go:141] libmachine: (test-preload-824561) DBG | About to run SSH command:
	I1212 20:46:40.042635   38081 main.go:141] libmachine: (test-preload-824561) DBG | exit 0
	I1212 20:46:40.130842   38081 main.go:141] libmachine: (test-preload-824561) DBG | SSH cmd err, output: <nil>: 
	I1212 20:46:40.131159   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetConfigRaw
	I1212 20:46:40.131870   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetIP
	I1212 20:46:40.134248   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.134587   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.134616   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.134862   38081 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/config.json ...
	I1212 20:46:40.135047   38081 machine.go:88] provisioning docker machine ...
	I1212 20:46:40.135064   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:40.135289   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetMachineName
	I1212 20:46:40.135443   38081 buildroot.go:166] provisioning hostname "test-preload-824561"
	I1212 20:46:40.135464   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetMachineName
	I1212 20:46:40.135599   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:40.137670   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.137970   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.137993   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.138117   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:40.138273   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:40.138445   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:40.138576   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:40.138706   38081 main.go:141] libmachine: Using SSH client type: native
	I1212 20:46:40.139051   38081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1212 20:46:40.139071   38081 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-824561 && echo "test-preload-824561" | sudo tee /etc/hostname
	I1212 20:46:40.266935   38081 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-824561
	
	I1212 20:46:40.266966   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:40.269518   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.269888   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.269918   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.270244   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:40.270480   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:40.270658   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:40.270816   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:40.270974   38081 main.go:141] libmachine: Using SSH client type: native
	I1212 20:46:40.271355   38081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1212 20:46:40.271382   38081 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-824561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-824561/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-824561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:46:40.395077   38081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:46:40.395110   38081 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:46:40.395149   38081 buildroot.go:174] setting up certificates
	I1212 20:46:40.395158   38081 provision.go:83] configureAuth start
	I1212 20:46:40.395168   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetMachineName
	I1212 20:46:40.395467   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetIP
	I1212 20:46:40.397787   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.398107   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.398127   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.398282   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:40.400547   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.400909   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.400943   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.401088   38081 provision.go:138] copyHostCerts
	I1212 20:46:40.401150   38081 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:46:40.401164   38081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:46:40.401223   38081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:46:40.401308   38081 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:46:40.401321   38081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:46:40.401346   38081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:46:40.401397   38081 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:46:40.401404   38081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:46:40.401423   38081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:46:40.401465   38081 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.test-preload-824561 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube test-preload-824561]
	I1212 20:46:40.971595   38081 provision.go:172] copyRemoteCerts
	I1212 20:46:40.971659   38081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:46:40.971681   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:40.974425   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.974754   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:40.974780   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:40.974972   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:40.975171   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:40.975375   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:40.975506   38081 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa Username:docker}
	I1212 20:46:41.061287   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:46:41.085462   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 20:46:41.107889   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:46:41.129268   38081 provision.go:86] duration metric: configureAuth took 734.09761ms
	I1212 20:46:41.129301   38081 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:46:41.129516   38081 config.go:182] Loaded profile config "test-preload-824561": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 20:46:41.129597   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:41.132145   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.132518   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:41.132550   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.132840   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:41.133041   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.133167   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.133300   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:41.133474   38081 main.go:141] libmachine: Using SSH client type: native
	I1212 20:46:41.133784   38081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1212 20:46:41.133800   38081 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:46:41.433339   38081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:46:41.433370   38081 machine.go:91] provisioned docker machine in 1.298310088s
	I1212 20:46:41.433386   38081 start.go:300] post-start starting for "test-preload-824561" (driver="kvm2")
	I1212 20:46:41.433415   38081 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:46:41.433436   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:41.433730   38081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:46:41.433755   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:41.436438   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.436801   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:41.436832   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.436974   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:41.437181   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.437359   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:41.437477   38081 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa Username:docker}
	I1212 20:46:41.524905   38081 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:46:41.529233   38081 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 20:46:41.529256   38081 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:46:41.529332   38081 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:46:41.529432   38081 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:46:41.529543   38081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:46:41.537934   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:46:41.560772   38081 start.go:303] post-start completed in 127.370947ms
	I1212 20:46:41.560805   38081 fix.go:56] fixHost completed within 20.649707115s
	I1212 20:46:41.560833   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:41.563737   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.564192   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:41.564225   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.564352   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:41.564563   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.564734   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.564868   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:41.565059   38081 main.go:141] libmachine: Using SSH client type: native
	I1212 20:46:41.565390   38081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1212 20:46:41.565407   38081 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 20:46:41.684132   38081 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702414001.635275676
	
	I1212 20:46:41.684176   38081 fix.go:206] guest clock: 1702414001.635275676
	I1212 20:46:41.684186   38081 fix.go:219] Guest: 2023-12-12 20:46:41.635275676 +0000 UTC Remote: 2023-12-12 20:46:41.560809907 +0000 UTC m=+28.182948441 (delta=74.465769ms)
	I1212 20:46:41.684206   38081 fix.go:190] guest clock delta is within tolerance: 74.465769ms
	I1212 20:46:41.684211   38081 start.go:83] releasing machines lock for "test-preload-824561", held for 20.773124788s
	I1212 20:46:41.684230   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:41.684491   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetIP
	I1212 20:46:41.687051   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.687391   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:41.687422   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.687549   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:41.688002   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:41.688181   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:46:41.688292   38081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:46:41.688330   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:41.688426   38081 ssh_runner.go:195] Run: cat /version.json
	I1212 20:46:41.688455   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:46:41.690802   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.690823   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.691221   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:41.691264   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.691298   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:41.691316   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:41.691439   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:41.691615   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.691617   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:46:41.691786   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:46:41.691801   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:41.691937   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:46:41.691975   38081 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa Username:docker}
	I1212 20:46:41.692065   38081 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa Username:docker}
	I1212 20:46:41.784645   38081 ssh_runner.go:195] Run: systemctl --version
	I1212 20:46:41.805698   38081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:46:41.949885   38081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:46:41.955918   38081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:46:41.955973   38081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:46:41.973259   38081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:46:41.973284   38081 start.go:475] detecting cgroup driver to use...
	I1212 20:46:41.973350   38081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:46:41.988396   38081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:46:42.001802   38081 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:46:42.001852   38081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:46:42.015158   38081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:46:42.028377   38081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:46:42.128245   38081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:46:42.237824   38081 docker.go:219] disabling docker service ...
	I1212 20:46:42.237900   38081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:46:42.251032   38081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:46:42.263365   38081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:46:42.365808   38081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:46:42.466741   38081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:46:42.480774   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:46:42.498605   38081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1212 20:46:42.498685   38081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:46:42.508492   38081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:46:42.508585   38081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:46:42.518318   38081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:46:42.528179   38081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
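For reference, the three sed edits above boil down to pinning the pause image and switching CRI-O to the cgroupfs cgroup driver. A quick way to confirm the result (a sketch, assuming the drop-in path shown in the log; surrounding keys in the file may differ):

    # Show the values the sed edits are expected to leave in CRI-O's drop-in config
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.7"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"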
	I1212 20:46:42.537889   38081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:46:42.547814   38081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:46:42.556278   38081 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:46:42.556349   38081 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 20:46:42.568633   38081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:46:42.578558   38081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:46:42.677923   38081 ssh_runner.go:195] Run: sudo systemctl restart crio
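The netfilter probe above fails only because the br_netfilter module is not loaded yet, which is why the runner falls back to modprobe before restarting CRI-O. The same preparation done by hand (a sketch of the steps in the log, not minikube's exact code path):

    sudo modprobe br_netfilter                                   # load the module the sysctl probe could not find
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"          # enable IPv4 forwarding for pod traffic
    sudo systemctl daemon-reload && sudo systemctl restart crio  # pick up the new CRI-O configuration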
	I1212 20:46:42.858989   38081 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:46:42.859067   38081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:46:42.863869   38081 start.go:543] Will wait 60s for crictl version
	I1212 20:46:42.863922   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:42.867776   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:46:42.905306   38081 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 20:46:42.905390   38081 ssh_runner.go:195] Run: crio --version
	I1212 20:46:42.955006   38081 ssh_runner.go:195] Run: crio --version
	I1212 20:46:43.004622   38081 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1212 20:46:43.006157   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetIP
	I1212 20:46:43.008676   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:43.009036   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:46:43.009071   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:46:43.009250   38081 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:46:43.013351   38081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:46:43.026252   38081 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 20:46:43.026330   38081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:46:43.064340   38081 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1212 20:46:43.064400   38081 ssh_runner.go:195] Run: which lz4
	I1212 20:46:43.068531   38081 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 20:46:43.072665   38081 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:46:43.072713   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1212 20:46:44.926554   38081 crio.go:444] Took 1.858071 seconds to copy over tarball
	I1212 20:46:44.926633   38081 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:46:47.845170   38081 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.918506762s)
	I1212 20:46:47.845200   38081 crio.go:451] Took 2.918615 seconds to extract the tarball
	I1212 20:46:47.845210   38081 ssh_runner.go:146] rm: /preloaded.tar.lz4
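The preload step copies the ~459 MB image tarball into the guest and unpacks it straight into /var before removing it. Reduced to the shell equivalent of the scp/tar/rm sequence above (a sketch; minikube streams the file over its own SSH session rather than calling scp directly):

    # Unpack the cached image tarball into /var and clean up, as the log does
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4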
	I1212 20:46:47.886072   38081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:46:47.935782   38081 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1212 20:46:47.935806   38081 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 20:46:47.935865   38081 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:46:47.935901   38081 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 20:46:47.935936   38081 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 20:46:47.935975   38081 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 20:46:47.936038   38081 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1212 20:46:47.936093   38081 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 20:46:47.935996   38081 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1212 20:46:47.936216   38081 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 20:46:47.937220   38081 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 20:46:47.937224   38081 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 20:46:47.937244   38081 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 20:46:47.937263   38081 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 20:46:47.937272   38081 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1212 20:46:47.937291   38081 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1212 20:46:47.937324   38081 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 20:46:47.937361   38081 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:46:48.115478   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1212 20:46:48.121983   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 20:46:48.130805   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1212 20:46:48.132012   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1212 20:46:48.132800   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1212 20:46:48.155575   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1212 20:46:48.178343   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1212 20:46:48.222912   38081 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1212 20:46:48.222962   38081 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1212 20:46:48.223011   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.242075   38081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:46:48.250007   38081 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1212 20:46:48.250060   38081 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 20:46:48.250111   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.308667   38081 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1212 20:46:48.308710   38081 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 20:46:48.308760   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.315847   38081 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1212 20:46:48.315896   38081 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1212 20:46:48.315902   38081 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 20:46:48.315922   38081 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1212 20:46:48.315950   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.315963   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.321482   38081 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1212 20:46:48.321518   38081 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 20:46:48.321564   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.333427   38081 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1212 20:46:48.333475   38081 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 20:46:48.333498   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1212 20:46:48.333517   38081 ssh_runner.go:195] Run: which crictl
	I1212 20:46:48.461986   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 20:46:48.462028   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1212 20:46:48.462055   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1212 20:46:48.462102   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1212 20:46:48.462138   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1212 20:46:48.462192   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1212 20:46:48.462259   38081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1212 20:46:48.799919   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1212 20:46:48.930742   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1212 20:46:48.930774   38081 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1212 20:46:48.930830   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1212 20:46:48.930900   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1212 20:46:48.931001   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1212 20:46:48.955675   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1212 20:46:48.955804   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 20:46:48.957890   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1212 20:46:48.957969   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1212 20:46:48.958059   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1212 20:46:48.958064   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1212 20:46:48.957982   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 20:46:48.958132   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 20:46:48.958134   38081 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1212 20:46:48.958244   38081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 20:46:50.308721   38081 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (1.377690043s)
	I1212 20:46:50.308760   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1212 20:46:50.308721   38081 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (1.377864383s)
	I1212 20:46:50.308780   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1212 20:46:50.308788   38081 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.352954929s)
	I1212 20:46:50.308821   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1212 20:46:50.308795   38081 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1212 20:46:50.308833   38081 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (1.350754471s)
	I1212 20:46:50.308850   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1212 20:46:50.308884   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1212 20:46:50.308919   38081 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (1.350836858s)
	I1212 20:46:50.308935   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1212 20:46:50.308964   38081 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (1.350809569s)
	I1212 20:46:50.308978   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1212 20:46:50.309025   38081 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.350750299s)
	I1212 20:46:50.309062   38081 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1212 20:46:50.656444   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1212 20:46:50.656484   38081 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 20:46:50.656522   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 20:46:51.404561   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1212 20:46:51.404601   38081 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1212 20:46:51.404651   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1212 20:46:53.657414   38081 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.252739229s)
	I1212 20:46:53.657448   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1212 20:46:53.657474   38081 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 20:46:53.657520   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 20:46:54.502287   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1212 20:46:54.502333   38081 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 20:46:54.502390   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 20:46:55.351922   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1212 20:46:55.351966   38081 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 20:46:55.352017   38081 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 20:46:55.794240   38081 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1212 20:46:55.794279   38081 cache_images.go:123] Successfully loaded all cached images
	I1212 20:46:55.794290   38081 cache_images.go:92] LoadImages completed in 7.858474165s
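Because the extracted preload still lacked the v1.24.4 images, each one is shipped from the host cache and imported with podman. Per image, the pattern reduces to (a sketch using one of the file names from the log):

    # Skip the copy if the archive is already on the guest, then import it into CRI-O's storage
    stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4 || echo "not present, would be copied over first"
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4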
	I1212 20:46:55.794355   38081 ssh_runner.go:195] Run: crio config
	I1212 20:46:55.864740   38081 cni.go:84] Creating CNI manager for ""
	I1212 20:46:55.864767   38081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:46:55.864785   38081 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 20:46:55.864801   38081 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-824561 NodeName:test-preload-824561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:46:55.864929   38081 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-824561"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:46:55.864990   38081 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-824561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-824561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 20:46:55.865041   38081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1212 20:46:55.875131   38081 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 20:46:55.875212   38081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:46:55.884821   38081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 20:46:55.903397   38081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:46:55.922053   38081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1212 20:46:55.941337   38081 ssh_runner.go:195] Run: grep 192.168.39.111	control-plane.minikube.internal$ /etc/hosts
	I1212 20:46:55.945483   38081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:46:55.959869   38081 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561 for IP: 192.168.39.111
	I1212 20:46:55.959906   38081 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:46:55.960051   38081 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 20:46:55.960104   38081 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 20:46:55.960212   38081 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.key
	I1212 20:46:55.960311   38081 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/apiserver.key.f4aa454f
	I1212 20:46:55.960374   38081 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/proxy-client.key
	I1212 20:46:55.960503   38081 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 20:46:55.960551   38081 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 20:46:55.960566   38081 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 20:46:55.960607   38081 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:46:55.960647   38081 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:46:55.960684   38081 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 20:46:55.960751   38081 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:46:55.961379   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 20:46:55.988095   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:46:56.015444   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:46:56.039795   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:46:56.064245   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:46:56.088896   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:46:56.113734   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:46:56.138276   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:46:56.163040   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 20:46:56.187292   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 20:46:56.211640   38081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:46:56.235947   38081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:46:56.253246   38081 ssh_runner.go:195] Run: openssl version
	I1212 20:46:56.259057   38081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 20:46:56.269389   38081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 20:46:56.274557   38081 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 20:46:56.274636   38081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 20:46:56.280734   38081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 20:46:56.291141   38081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 20:46:56.301929   38081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 20:46:56.307050   38081 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 20:46:56.307116   38081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 20:46:56.313376   38081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 20:46:56.323688   38081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 20:46:56.334182   38081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:46:56.339212   38081 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:46:56.339298   38081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:46:56.345052   38081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
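The openssl/ln pairs above follow the standard OpenSSL trust-directory convention: each CA certificate is symlinked under its subject hash so verification can find it. For any of the three certificates the idiom is (a sketch; 51391683.0, 3ec20f2e.0 and b5213941.0 in the log were produced the same way):

    # Compute the subject hash OpenSSL expects and link the certificate under it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"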
	I1212 20:46:56.355613   38081 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 20:46:56.360576   38081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:46:56.366822   38081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:46:56.372791   38081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:46:56.379668   38081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:46:56.386393   38081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:46:56.393160   38081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
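The -checkend 86400 probes above only ask whether each certificate remains valid for at least another 24 hours; the exit status is what the restart path acts on. For example:

    # Exit 0 if the cert will not expire within 86400 seconds (24h), non-zero otherwise
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "still valid for at least a day"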
	I1212 20:46:56.399614   38081 kubeadm.go:404] StartCluster: {Name:test-preload-824561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-824561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:46:56.399726   38081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:46:56.399787   38081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:46:56.454648   38081 cri.go:89] found id: ""
	I1212 20:46:56.454725   38081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:46:56.465149   38081 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 20:46:56.465171   38081 kubeadm.go:636] restartCluster start
	I1212 20:46:56.465234   38081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:46:56.474720   38081 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:56.475226   38081 kubeconfig.go:135] verify returned: extract IP: "test-preload-824561" does not appear in /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:46:56.475381   38081 kubeconfig.go:146] "test-preload-824561" context is missing from /home/jenkins/minikube-integration/17734-9188/kubeconfig - will repair!
	I1212 20:46:56.475753   38081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:46:56.476536   38081 kapi.go:59] client config for test-preload-824561: &rest.Config{Host:"https://192.168.39.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:46:56.477272   38081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:46:56.486569   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:56.486625   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:56.497850   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:56.497872   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:56.497917   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:56.508640   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:57.009493   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:57.009580   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:57.020413   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:57.508861   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:57.508931   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:57.520071   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:58.009230   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:58.009302   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:58.020330   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:58.509750   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:58.509848   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:58.521358   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:59.008942   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:59.009024   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:59.020008   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:46:59.509708   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:46:59.509796   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:46:59.520877   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:00.009495   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:00.009570   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:00.020865   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:00.509415   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:00.509528   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:00.520601   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:01.009594   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:01.009662   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:01.020816   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:01.509434   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:01.509505   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:01.520378   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:02.008922   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:02.009021   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:02.019897   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:02.509557   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:02.509654   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:02.520867   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:03.009524   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:03.009632   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:03.021498   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:03.509618   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:03.509687   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:03.521080   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:04.009711   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:04.009811   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:04.020700   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:04.509377   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:04.509471   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:04.520707   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:05.009199   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:05.009355   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:05.021203   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:05.509789   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:05.509910   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:05.520988   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:06.009006   38081 api_server.go:166] Checking apiserver status ...
	I1212 20:47:06.009086   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 20:47:06.020071   38081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:47:06.486984   38081 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 20:47:06.487047   38081 kubeadm.go:1135] stopping kube-system containers ...
	I1212 20:47:06.487058   38081 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:47:06.487116   38081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:47:06.529374   38081 cri.go:89] found id: ""
	I1212 20:47:06.529453   38081 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:47:06.544577   38081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:47:06.553242   38081 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:47:06.553300   38081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:47:06.561534   38081 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 20:47:06.561556   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:47:06.658673   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:47:07.330781   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:47:07.681856   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:47:07.762473   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
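Because the kubeconfig and manifest files were missing, the restart path regenerates the control plane phase by phase instead of running a full kubeadm init. Stripped of the PATH plumbing, the sequence shown above is (a sketch of the phases named in the log; the addon phase runs later, once the API server is healthy):

    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml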
	I1212 20:47:07.857121   38081 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:47:07.857188   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:07.892005   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:08.422021   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:08.921860   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:09.421543   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:09.922345   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:09.943599   38081 api_server.go:72] duration metric: took 2.086479779s to wait for apiserver process to appear ...
	I1212 20:47:09.943624   38081 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:47:09.943646   38081 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1212 20:47:14.631099   38081 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:47:14.631130   38081 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:47:14.631163   38081 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1212 20:47:14.650563   38081 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:47:14.650596   38081 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:47:15.151300   38081 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1212 20:47:15.163009   38081 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:47:15.163045   38081 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:47:15.651404   38081 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1212 20:47:15.660429   38081 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:47:15.660474   38081 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:47:16.151586   38081 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1212 20:47:16.168764   38081 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I1212 20:47:16.178700   38081 api_server.go:141] control plane version: v1.24.4
	I1212 20:47:16.178726   38081 api_server.go:131] duration metric: took 6.23509503s to wait for apiserver health ...
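	For reference, the api_server.go polling above (repeated GETs against /healthz until the 500 responses with failing poststarthooks turn into a 200) boils down to something like the sketch below. The certificate and CA paths, retry interval, and deadline are placeholder assumptions for illustration, not minikube's actual values.

	// Rough sketch of polling an apiserver /healthz endpoint with a client
	// certificate until it reports healthy. Paths and timings are placeholders.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Placeholder client cert/key and CA; a real caller would use the
		// profile's client.crt/client.key and ca.crt.
		cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
			},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.111:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				// A 500 response lists the poststarthooks that are still failing,
				// e.g. "[-]poststarthook/rbac/bootstrap-roles failed".
				fmt.Printf("healthz %d\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver to become healthy")
	}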
	I1212 20:47:16.178734   38081 cni.go:84] Creating CNI manager for ""
	I1212 20:47:16.178740   38081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:47:16.180779   38081 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 20:47:16.182406   38081 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 20:47:16.195870   38081 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 20:47:16.236178   38081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:47:16.257784   38081 system_pods.go:59] 8 kube-system pods found
	I1212 20:47:16.257822   38081 system_pods.go:61] "coredns-6d4b75cb6d-tnw5q" [28a7157e-bf2f-49c3-893c-aff886510769] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:47:16.257829   38081 system_pods.go:61] "coredns-6d4b75cb6d-xg7ls" [7251ded9-bb5d-4f85-8567-ac4475ed9d1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:47:16.257836   38081 system_pods.go:61] "etcd-test-preload-824561" [eeb15361-3219-4e68-a126-82564e30552f] Running
	I1212 20:47:16.257841   38081 system_pods.go:61] "kube-apiserver-test-preload-824561" [6614a75e-7a70-4f47-aac9-01f592e64b77] Running
	I1212 20:47:16.257854   38081 system_pods.go:61] "kube-controller-manager-test-preload-824561" [7c93bd16-7ca1-4066-b160-13d9c3be3e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:47:16.257860   38081 system_pods.go:61] "kube-proxy-pxpn6" [737dd1a0-476a-493f-bbbc-727d6e8abbc8] Running
	I1212 20:47:16.257867   38081 system_pods.go:61] "kube-scheduler-test-preload-824561" [d0a1a240-52a2-44be-9925-ea77c4a9c706] Running
	I1212 20:47:16.257877   38081 system_pods.go:61] "storage-provisioner" [9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:47:16.257894   38081 system_pods.go:74] duration metric: took 21.694599ms to wait for pod list to return data ...
	I1212 20:47:16.257904   38081 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:47:16.266230   38081 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:47:16.266280   38081 node_conditions.go:123] node cpu capacity is 2
	I1212 20:47:16.266321   38081 node_conditions.go:105] duration metric: took 8.412245ms to run NodePressure ...
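	The node_conditions.go lines above come from reading the node's capacity and pressure conditions out of the API. A minimal client-go sketch along those lines, with a placeholder kubeconfig path, might look like this:

	// Hypothetical sketch of the capacity/NodePressure checks reported above;
	// not minikube's node_conditions.go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity values like "17784752Ki" ephemeral storage and "2" CPUs
			// are read straight from node.Status.Capacity.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
			for _, c := range n.Status.Conditions {
				// Memory/Disk/PID pressure should all be False on a healthy node.
				if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}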
	I1212 20:47:16.266352   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:47:16.564861   38081 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 20:47:16.572055   38081 kubeadm.go:787] kubelet initialised
	I1212 20:47:16.572082   38081 kubeadm.go:788] duration metric: took 7.193457ms waiting for restarted kubelet to initialise ...
	I1212 20:47:16.572088   38081 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:47:16.578385   38081 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-tnw5q" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:16.588478   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "coredns-6d4b75cb6d-tnw5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.588511   38081 pod_ready.go:81] duration metric: took 10.100289ms waiting for pod "coredns-6d4b75cb6d-tnw5q" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:16.588522   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "coredns-6d4b75cb6d-tnw5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.588540   38081 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:16.599346   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.599369   38081 pod_ready.go:81] duration metric: took 10.812987ms waiting for pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:16.599377   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.599385   38081 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:16.607133   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "etcd-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.607180   38081 pod_ready.go:81] duration metric: took 7.78353ms waiting for pod "etcd-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:16.607192   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "etcd-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.607209   38081 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:16.642522   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "kube-apiserver-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.642547   38081 pod_ready.go:81] duration metric: took 35.327571ms waiting for pod "kube-apiserver-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:16.642555   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "kube-apiserver-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:16.642560   38081 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:17.040502   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:17.040535   38081 pod_ready.go:81] duration metric: took 397.960673ms waiting for pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:17.040547   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:17.040556   38081 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pxpn6" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:17.441004   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "kube-proxy-pxpn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:17.441035   38081 pod_ready.go:81] duration metric: took 400.469792ms waiting for pod "kube-proxy-pxpn6" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:17.441045   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "kube-proxy-pxpn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:17.441052   38081 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:17.840269   38081 pod_ready.go:97] node "test-preload-824561" hosting pod "kube-scheduler-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:17.840300   38081 pod_ready.go:81] duration metric: took 399.240947ms waiting for pod "kube-scheduler-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	E1212 20:47:17.840313   38081 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-824561" hosting pod "kube-scheduler-test-preload-824561" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:17.840323   38081 pod_ready.go:38] duration metric: took 1.268226734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
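	The pod_ready.go waits above repeatedly fetch each system-critical pod and check its Ready condition, skipping pods whose node is not yet "Ready". A simplified, hypothetical version of that check for a single pod could look roughly like the following; the kubeconfig path and pod name are placeholders taken from this run purely for illustration.

	// Simplified sketch of waiting for one kube-system pod to report Ready.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-xg7ls", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}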
	I1212 20:47:17.840343   38081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:47:17.853192   38081 ops.go:34] apiserver oom_adj: -16
	I1212 20:47:17.853218   38081 kubeadm.go:640] restartCluster took 21.388039114s
	I1212 20:47:17.853228   38081 kubeadm.go:406] StartCluster complete in 21.453622566s
	I1212 20:47:17.853248   38081 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:47:17.853329   38081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:47:17.854002   38081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:47:17.854236   38081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:47:17.854351   38081 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 20:47:17.854431   38081 addons.go:69] Setting storage-provisioner=true in profile "test-preload-824561"
	I1212 20:47:17.854448   38081 addons.go:231] Setting addon storage-provisioner=true in "test-preload-824561"
	W1212 20:47:17.854465   38081 addons.go:240] addon storage-provisioner should already be in state true
	I1212 20:47:17.854468   38081 config.go:182] Loaded profile config "test-preload-824561": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 20:47:17.854483   38081 addons.go:69] Setting default-storageclass=true in profile "test-preload-824561"
	I1212 20:47:17.854510   38081 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-824561"
	I1212 20:47:17.854513   38081 host.go:66] Checking if "test-preload-824561" exists ...
	I1212 20:47:17.854858   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:47:17.854898   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:47:17.854926   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:47:17.854956   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:47:17.854835   38081 kapi.go:59] client config for test-preload-824561: &rest.Config{Host:"https://192.168.39.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:47:17.858753   38081 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-824561" context rescaled to 1 replicas
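	The kapi.go line above rescales the coredns deployment to a single replica. Using the scale subresource via client-go, that step is roughly the following sketch (the kubeconfig path is a placeholder, and this is not minikube's own implementation):

	// Hypothetical sketch of scaling the coredns deployment to 1 replica.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}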
	I1212 20:47:17.858800   38081 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:47:17.861528   38081 out.go:177] * Verifying Kubernetes components...
	I1212 20:47:17.863287   38081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:47:17.870232   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I1212 20:47:17.870244   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I1212 20:47:17.870694   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:47:17.870722   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:47:17.871175   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:47:17.871194   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:47:17.871308   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:47:17.871331   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:47:17.871592   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:47:17.871626   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:47:17.871778   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetState
	I1212 20:47:17.872179   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:47:17.872221   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:47:17.874332   38081 kapi.go:59] client config for test-preload-824561: &rest.Config{Host:"https://192.168.39.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.crt", KeyFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/test-preload-824561/client.key", CAFile:"/home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:47:17.874654   38081 addons.go:231] Setting addon default-storageclass=true in "test-preload-824561"
	W1212 20:47:17.874676   38081 addons.go:240] addon default-storageclass should already be in state true
	I1212 20:47:17.874715   38081 host.go:66] Checking if "test-preload-824561" exists ...
	I1212 20:47:17.875147   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:47:17.875197   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:47:17.888897   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1212 20:47:17.889437   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:47:17.889734   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I1212 20:47:17.890002   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:47:17.890027   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:47:17.890097   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:47:17.890319   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:47:17.890594   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:47:17.890615   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:47:17.890626   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetState
	I1212 20:47:17.890923   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:47:17.891543   38081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:47:17.891596   38081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:47:17.892338   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:47:17.894597   38081 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:47:17.896224   38081 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:47:17.896241   38081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:47:17.896258   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:47:17.899852   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:47:17.900324   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:47:17.900357   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:47:17.900639   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:47:17.900854   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:47:17.901073   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:47:17.901335   38081 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa Username:docker}
	I1212 20:47:17.908022   38081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I1212 20:47:17.908539   38081 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:47:17.909043   38081 main.go:141] libmachine: Using API Version  1
	I1212 20:47:17.909071   38081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:47:17.909419   38081 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:47:17.909608   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetState
	I1212 20:47:17.911269   38081 main.go:141] libmachine: (test-preload-824561) Calling .DriverName
	I1212 20:47:17.911536   38081 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:47:17.911564   38081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:47:17.911582   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHHostname
	I1212 20:47:17.914397   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:47:17.914795   38081 main.go:141] libmachine: (test-preload-824561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:76:bf", ip: ""} in network mk-test-preload-824561: {Iface:virbr1 ExpiryTime:2023-12-12 21:46:33 +0000 UTC Type:0 Mac:52:54:00:fd:76:bf Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-824561 Clientid:01:52:54:00:fd:76:bf}
	I1212 20:47:17.914836   38081 main.go:141] libmachine: (test-preload-824561) DBG | domain test-preload-824561 has defined IP address 192.168.39.111 and MAC address 52:54:00:fd:76:bf in network mk-test-preload-824561
	I1212 20:47:17.914999   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHPort
	I1212 20:47:17.915210   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHKeyPath
	I1212 20:47:17.915400   38081 main.go:141] libmachine: (test-preload-824561) Calling .GetSSHUsername
	I1212 20:47:17.915592   38081 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/test-preload-824561/id_rsa Username:docker}
	I1212 20:47:18.048162   38081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:47:18.063062   38081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:47:18.084045   38081 node_ready.go:35] waiting up to 6m0s for node "test-preload-824561" to be "Ready" ...
	I1212 20:47:18.084111   38081 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 20:47:19.005976   38081 main.go:141] libmachine: Making call to close driver server
	I1212 20:47:19.006008   38081 main.go:141] libmachine: (test-preload-824561) Calling .Close
	I1212 20:47:19.006073   38081 main.go:141] libmachine: Making call to close driver server
	I1212 20:47:19.006096   38081 main.go:141] libmachine: (test-preload-824561) Calling .Close
	I1212 20:47:19.006328   38081 main.go:141] libmachine: (test-preload-824561) DBG | Closing plugin on server side
	I1212 20:47:19.006361   38081 main.go:141] libmachine: (test-preload-824561) DBG | Closing plugin on server side
	I1212 20:47:19.006389   38081 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:47:19.006397   38081 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:47:19.006404   38081 main.go:141] libmachine: Making call to close driver server
	I1212 20:47:19.006409   38081 main.go:141] libmachine: (test-preload-824561) Calling .Close
	I1212 20:47:19.006428   38081 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:47:19.006448   38081 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:47:19.006463   38081 main.go:141] libmachine: Making call to close driver server
	I1212 20:47:19.006471   38081 main.go:141] libmachine: (test-preload-824561) Calling .Close
	I1212 20:47:19.006646   38081 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:47:19.006661   38081 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:47:19.006825   38081 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:47:19.006843   38081 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:47:19.006841   38081 main.go:141] libmachine: (test-preload-824561) DBG | Closing plugin on server side
	I1212 20:47:19.015272   38081 main.go:141] libmachine: Making call to close driver server
	I1212 20:47:19.015291   38081 main.go:141] libmachine: (test-preload-824561) Calling .Close
	I1212 20:47:19.015516   38081 main.go:141] libmachine: Successfully made call to close driver server
	I1212 20:47:19.015533   38081 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 20:47:19.015543   38081 main.go:141] libmachine: (test-preload-824561) DBG | Closing plugin on server side
	I1212 20:47:19.017410   38081 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:47:19.019063   38081 addons.go:502] enable addons completed in 1.164721625s: enabled=[storage-provisioner default-storageclass]
	I1212 20:47:20.245821   38081 node_ready.go:58] node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:22.246049   38081 node_ready.go:58] node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:24.246082   38081 node_ready.go:58] node "test-preload-824561" has status "Ready":"False"
	I1212 20:47:25.245761   38081 node_ready.go:49] node "test-preload-824561" has status "Ready":"True"
	I1212 20:47:25.245781   38081 node_ready.go:38] duration metric: took 7.161700825s waiting for node "test-preload-824561" to be "Ready" ...
	I1212 20:47:25.245791   38081 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:47:25.252211   38081 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:25.257874   38081 pod_ready.go:92] pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace has status "Ready":"True"
	I1212 20:47:25.257898   38081 pod_ready.go:81] duration metric: took 5.665032ms waiting for pod "coredns-6d4b75cb6d-xg7ls" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:25.257907   38081 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:25.262502   38081 pod_ready.go:92] pod "etcd-test-preload-824561" in "kube-system" namespace has status "Ready":"True"
	I1212 20:47:25.262521   38081 pod_ready.go:81] duration metric: took 4.607536ms waiting for pod "etcd-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:25.262532   38081 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:26.783490   38081 pod_ready.go:92] pod "kube-apiserver-test-preload-824561" in "kube-system" namespace has status "Ready":"True"
	I1212 20:47:26.783513   38081 pod_ready.go:81] duration metric: took 1.520974115s waiting for pod "kube-apiserver-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:26.783522   38081 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:26.846403   38081 pod_ready.go:92] pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace has status "Ready":"True"
	I1212 20:47:26.846424   38081 pod_ready.go:81] duration metric: took 62.896294ms waiting for pod "kube-controller-manager-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:26.846434   38081 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxpn6" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:27.246657   38081 pod_ready.go:92] pod "kube-proxy-pxpn6" in "kube-system" namespace has status "Ready":"True"
	I1212 20:47:27.246679   38081 pod_ready.go:81] duration metric: took 400.239673ms waiting for pod "kube-proxy-pxpn6" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:27.246694   38081 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:29.553564   38081 pod_ready.go:92] pod "kube-scheduler-test-preload-824561" in "kube-system" namespace has status "Ready":"True"
	I1212 20:47:29.553585   38081 pod_ready.go:81] duration metric: took 2.306884856s waiting for pod "kube-scheduler-test-preload-824561" in "kube-system" namespace to be "Ready" ...
	I1212 20:47:29.553594   38081 pod_ready.go:38] duration metric: took 4.307795715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 20:47:29.553607   38081 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:47:29.553660   38081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:47:29.569803   38081 api_server.go:72] duration metric: took 11.710951704s to wait for apiserver process to appear ...
	I1212 20:47:29.569828   38081 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:47:29.569850   38081 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1212 20:47:29.575077   38081 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I1212 20:47:29.575909   38081 api_server.go:141] control plane version: v1.24.4
	I1212 20:47:29.575926   38081 api_server.go:131] duration metric: took 6.093045ms to wait for apiserver health ...
	I1212 20:47:29.575933   38081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:47:29.580968   38081 system_pods.go:59] 7 kube-system pods found
	I1212 20:47:29.580995   38081 system_pods.go:61] "coredns-6d4b75cb6d-xg7ls" [7251ded9-bb5d-4f85-8567-ac4475ed9d1e] Running
	I1212 20:47:29.581009   38081 system_pods.go:61] "etcd-test-preload-824561" [eeb15361-3219-4e68-a126-82564e30552f] Running
	I1212 20:47:29.581016   38081 system_pods.go:61] "kube-apiserver-test-preload-824561" [6614a75e-7a70-4f47-aac9-01f592e64b77] Running
	I1212 20:47:29.581023   38081 system_pods.go:61] "kube-controller-manager-test-preload-824561" [7c93bd16-7ca1-4066-b160-13d9c3be3e06] Running
	I1212 20:47:29.581033   38081 system_pods.go:61] "kube-proxy-pxpn6" [737dd1a0-476a-493f-bbbc-727d6e8abbc8] Running
	I1212 20:47:29.581046   38081 system_pods.go:61] "kube-scheduler-test-preload-824561" [d0a1a240-52a2-44be-9925-ea77c4a9c706] Running
	I1212 20:47:29.581052   38081 system_pods.go:61] "storage-provisioner" [9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc] Running
	I1212 20:47:29.581059   38081 system_pods.go:74] duration metric: took 5.12021ms to wait for pod list to return data ...
	I1212 20:47:29.581071   38081 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:47:29.646092   38081 default_sa.go:45] found service account: "default"
	I1212 20:47:29.646122   38081 default_sa.go:55] duration metric: took 65.041323ms for default service account to be created ...
	I1212 20:47:29.646133   38081 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:47:29.849586   38081 system_pods.go:86] 7 kube-system pods found
	I1212 20:47:29.849616   38081 system_pods.go:89] "coredns-6d4b75cb6d-xg7ls" [7251ded9-bb5d-4f85-8567-ac4475ed9d1e] Running
	I1212 20:47:29.849624   38081 system_pods.go:89] "etcd-test-preload-824561" [eeb15361-3219-4e68-a126-82564e30552f] Running
	I1212 20:47:29.849631   38081 system_pods.go:89] "kube-apiserver-test-preload-824561" [6614a75e-7a70-4f47-aac9-01f592e64b77] Running
	I1212 20:47:29.849637   38081 system_pods.go:89] "kube-controller-manager-test-preload-824561" [7c93bd16-7ca1-4066-b160-13d9c3be3e06] Running
	I1212 20:47:29.849640   38081 system_pods.go:89] "kube-proxy-pxpn6" [737dd1a0-476a-493f-bbbc-727d6e8abbc8] Running
	I1212 20:47:29.849645   38081 system_pods.go:89] "kube-scheduler-test-preload-824561" [d0a1a240-52a2-44be-9925-ea77c4a9c706] Running
	I1212 20:47:29.849651   38081 system_pods.go:89] "storage-provisioner" [9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc] Running
	I1212 20:47:29.849661   38081 system_pods.go:126] duration metric: took 203.521614ms to wait for k8s-apps to be running ...
	I1212 20:47:29.849674   38081 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:47:29.849721   38081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:47:29.862371   38081 system_svc.go:56] duration metric: took 12.688798ms WaitForService to wait for kubelet.
	I1212 20:47:29.862404   38081 kubeadm.go:581] duration metric: took 12.003571129s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 20:47:29.862426   38081 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:47:30.046450   38081 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 20:47:30.046485   38081 node_conditions.go:123] node cpu capacity is 2
	I1212 20:47:30.046496   38081 node_conditions.go:105] duration metric: took 184.063584ms to run NodePressure ...
	I1212 20:47:30.046509   38081 start.go:228] waiting for startup goroutines ...
	I1212 20:47:30.046516   38081 start.go:233] waiting for cluster config update ...
	I1212 20:47:30.046527   38081 start.go:242] writing updated cluster config ...
	I1212 20:47:30.046877   38081 ssh_runner.go:195] Run: rm -f paused
	I1212 20:47:30.092028   38081 start.go:600] kubectl: 1.28.4, cluster: 1.24.4 (minor skew: 4)
	I1212 20:47:30.094026   38081 out.go:177] 
	W1212 20:47:30.095567   38081 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.24.4.
	I1212 20:47:30.097023   38081 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1212 20:47:30.098403   38081 out.go:177] * Done! kubectl is now configured to use "test-preload-824561" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 20:46:32 UTC, ends at Tue 2023-12-12 20:47:31 UTC. --
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.047283329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702414051047267677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=3e1c2cc4-c7b2-496c-9ff1-922585fa9783 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.047941689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cb29ab11-8dda-4397-9508-343d6911e140 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.048001522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cb29ab11-8dda-4397-9508-343d6911e140 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.048225354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a79859b241239ef36864fdd178491a8ee812d0f8275cdd3826264d8645a04372,PodSandboxId:860ece1bab156cc12ffc2d1d53bcbfc378eba73e30ee9e9eb8c1396281fe67e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702414043748465014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xg7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7251ded9-bb5d-4f85-8567-ac4475ed9d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 776f4de9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1f7e9a577fc1d253cb7f1d60c3c119afe3c10f0793b002831ce68899da8fe0,PodSandboxId:27e78645da76f4f30dc34212f2eca004b332d5c9719e0d147d0eccbb6a4bbb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702414036688116529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 4420ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9068122ac6ca868f79b36ddee0a56120cbe5f5926963558a7fc3c3a48be019,PodSandboxId:6082d3f2923669ab65314be68b40f8b62e4b4e44a4dce36b23c5271556812aed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702414036198300733,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
737dd1a0-476a-493f-bbbc-727d6e8abbc8,},Annotations:map[string]string{io.kubernetes.container.hash: ce8d4231,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c631be5bcc2627c83dcc1fe76b710f3ba5ea42cd99abb1327debd0b7fb4e54ec,PodSandboxId:8997c96f50b1dc853b202ac9c6720837ed273501a2b998f1ae85a7c0679de6d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702414029428132474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54abb0da5d6a6671085882c398b0f2cd,},Annotations:map
[string]string{io.kubernetes.container.hash: 3a5d8205,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0701630c6a216ed82d01ccbfbe3160d70470987f1deb1127abe9d93317e48b9a,PodSandboxId:0f608732f88bc4c0805eb600286213056859587deb4c0dc561bba5d6ee1ef17d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702414029159448566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce9785331010f1e80155a8964fd8d6b5,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faecb87750890578bb524414e56ed7875dcacd4649ae3495b669209e0a36730,PodSandboxId:8963aacb0a1bfb5d02a4c957f0aa9b6ed5c27d3a73818373b5c4798aae560da7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702414029105415457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee415d027ed4671b2676f60894c7ae6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b753a3df34f257297a9eea9db1dbff8c0af7faa08b10f8ddd3b6bd2503ef9d1f,PodSandboxId:b37ff135cb9e8644b1d735c197da80559d5f511a2a6ac187b965fc3a5b91f071,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702414028824193099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c37bb33b909376a12737567c936cec,},Annotations:map[strin
g]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cb29ab11-8dda-4397-9508-343d6911e140 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.087399453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f3a867d-9f25-4c3a-a75c-919b0f42700c name=/runtime.v1.RuntimeService/Version
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.087480936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f3a867d-9f25-4c3a-a75c-919b0f42700c name=/runtime.v1.RuntimeService/Version
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.088619141Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=10bbf8d4-d13f-4366-9152-8878c456a01f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.089154437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702414051089139033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=10bbf8d4-d13f-4366-9152-8878c456a01f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.089723857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=826ea765-ff6e-438c-9840-22970a83e1e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.089804043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=826ea765-ff6e-438c-9840-22970a83e1e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.089995207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a79859b241239ef36864fdd178491a8ee812d0f8275cdd3826264d8645a04372,PodSandboxId:860ece1bab156cc12ffc2d1d53bcbfc378eba73e30ee9e9eb8c1396281fe67e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702414043748465014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xg7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7251ded9-bb5d-4f85-8567-ac4475ed9d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 776f4de9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1f7e9a577fc1d253cb7f1d60c3c119afe3c10f0793b002831ce68899da8fe0,PodSandboxId:27e78645da76f4f30dc34212f2eca004b332d5c9719e0d147d0eccbb6a4bbb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702414036688116529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 4420ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9068122ac6ca868f79b36ddee0a56120cbe5f5926963558a7fc3c3a48be019,PodSandboxId:6082d3f2923669ab65314be68b40f8b62e4b4e44a4dce36b23c5271556812aed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702414036198300733,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
737dd1a0-476a-493f-bbbc-727d6e8abbc8,},Annotations:map[string]string{io.kubernetes.container.hash: ce8d4231,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c631be5bcc2627c83dcc1fe76b710f3ba5ea42cd99abb1327debd0b7fb4e54ec,PodSandboxId:8997c96f50b1dc853b202ac9c6720837ed273501a2b998f1ae85a7c0679de6d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702414029428132474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54abb0da5d6a6671085882c398b0f2cd,},Annotations:map
[string]string{io.kubernetes.container.hash: 3a5d8205,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0701630c6a216ed82d01ccbfbe3160d70470987f1deb1127abe9d93317e48b9a,PodSandboxId:0f608732f88bc4c0805eb600286213056859587deb4c0dc561bba5d6ee1ef17d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702414029159448566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce9785331010f1e80155a8964fd8d6b5,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faecb87750890578bb524414e56ed7875dcacd4649ae3495b669209e0a36730,PodSandboxId:8963aacb0a1bfb5d02a4c957f0aa9b6ed5c27d3a73818373b5c4798aae560da7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702414029105415457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee415d027ed4671b2676f60894c7ae6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b753a3df34f257297a9eea9db1dbff8c0af7faa08b10f8ddd3b6bd2503ef9d1f,PodSandboxId:b37ff135cb9e8644b1d735c197da80559d5f511a2a6ac187b965fc3a5b91f071,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702414028824193099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c37bb33b909376a12737567c936cec,},Annotations:map[strin
g]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=826ea765-ff6e-438c-9840-22970a83e1e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.130653692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=86cca5e3-71d6-4c9d-bd81-308025bd16b2 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.130740320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=86cca5e3-71d6-4c9d-bd81-308025bd16b2 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.132094869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=189b371e-a8cb-434d-ae99-43339784af2d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.132530004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702414051132517129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=189b371e-a8cb-434d-ae99-43339784af2d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.133186424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ed3ff49-31a4-4d93-a6d6-cf684595043d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.133264951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ed3ff49-31a4-4d93-a6d6-cf684595043d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.133475225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a79859b241239ef36864fdd178491a8ee812d0f8275cdd3826264d8645a04372,PodSandboxId:860ece1bab156cc12ffc2d1d53bcbfc378eba73e30ee9e9eb8c1396281fe67e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702414043748465014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xg7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7251ded9-bb5d-4f85-8567-ac4475ed9d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 776f4de9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1f7e9a577fc1d253cb7f1d60c3c119afe3c10f0793b002831ce68899da8fe0,PodSandboxId:27e78645da76f4f30dc34212f2eca004b332d5c9719e0d147d0eccbb6a4bbb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702414036688116529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 4420ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9068122ac6ca868f79b36ddee0a56120cbe5f5926963558a7fc3c3a48be019,PodSandboxId:6082d3f2923669ab65314be68b40f8b62e4b4e44a4dce36b23c5271556812aed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702414036198300733,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
737dd1a0-476a-493f-bbbc-727d6e8abbc8,},Annotations:map[string]string{io.kubernetes.container.hash: ce8d4231,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c631be5bcc2627c83dcc1fe76b710f3ba5ea42cd99abb1327debd0b7fb4e54ec,PodSandboxId:8997c96f50b1dc853b202ac9c6720837ed273501a2b998f1ae85a7c0679de6d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702414029428132474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54abb0da5d6a6671085882c398b0f2cd,},Annotations:map
[string]string{io.kubernetes.container.hash: 3a5d8205,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0701630c6a216ed82d01ccbfbe3160d70470987f1deb1127abe9d93317e48b9a,PodSandboxId:0f608732f88bc4c0805eb600286213056859587deb4c0dc561bba5d6ee1ef17d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702414029159448566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce9785331010f1e80155a8964fd8d6b5,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faecb87750890578bb524414e56ed7875dcacd4649ae3495b669209e0a36730,PodSandboxId:8963aacb0a1bfb5d02a4c957f0aa9b6ed5c27d3a73818373b5c4798aae560da7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702414029105415457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee415d027ed4671b2676f60894c7ae6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b753a3df34f257297a9eea9db1dbff8c0af7faa08b10f8ddd3b6bd2503ef9d1f,PodSandboxId:b37ff135cb9e8644b1d735c197da80559d5f511a2a6ac187b965fc3a5b91f071,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702414028824193099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c37bb33b909376a12737567c936cec,},Annotations:map[strin
g]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ed3ff49-31a4-4d93-a6d6-cf684595043d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.166440920Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a24eaee0-5271-4c16-ad4a-56cdbd28fce3 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.166531428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a24eaee0-5271-4c16-ad4a-56cdbd28fce3 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.167678194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d75a46de-d8e7-45f3-a60e-f98707c9f0f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.168230612Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702414051168215188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=d75a46de-d8e7-45f3-a60e-f98707c9f0f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.168749649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f5f8328-d6c3-4e8c-9250-77590aa84b2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.168800530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f5f8328-d6c3-4e8c-9250-77590aa84b2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:47:31 test-preload-824561 crio[713]: time="2023-12-12 20:47:31.168970540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a79859b241239ef36864fdd178491a8ee812d0f8275cdd3826264d8645a04372,PodSandboxId:860ece1bab156cc12ffc2d1d53bcbfc378eba73e30ee9e9eb8c1396281fe67e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702414043748465014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xg7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7251ded9-bb5d-4f85-8567-ac4475ed9d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 776f4de9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f1f7e9a577fc1d253cb7f1d60c3c119afe3c10f0793b002831ce68899da8fe0,PodSandboxId:27e78645da76f4f30dc34212f2eca004b332d5c9719e0d147d0eccbb6a4bbb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702414036688116529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc,},Annotations:map[string]string{io.kubernetes.container.hash: 4420ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9068122ac6ca868f79b36ddee0a56120cbe5f5926963558a7fc3c3a48be019,PodSandboxId:6082d3f2923669ab65314be68b40f8b62e4b4e44a4dce36b23c5271556812aed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702414036198300733,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
737dd1a0-476a-493f-bbbc-727d6e8abbc8,},Annotations:map[string]string{io.kubernetes.container.hash: ce8d4231,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c631be5bcc2627c83dcc1fe76b710f3ba5ea42cd99abb1327debd0b7fb4e54ec,PodSandboxId:8997c96f50b1dc853b202ac9c6720837ed273501a2b998f1ae85a7c0679de6d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702414029428132474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54abb0da5d6a6671085882c398b0f2cd,},Annotations:map
[string]string{io.kubernetes.container.hash: 3a5d8205,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0701630c6a216ed82d01ccbfbe3160d70470987f1deb1127abe9d93317e48b9a,PodSandboxId:0f608732f88bc4c0805eb600286213056859587deb4c0dc561bba5d6ee1ef17d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702414029159448566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce9785331010f1e80155a8964fd8d6b5,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faecb87750890578bb524414e56ed7875dcacd4649ae3495b669209e0a36730,PodSandboxId:8963aacb0a1bfb5d02a4c957f0aa9b6ed5c27d3a73818373b5c4798aae560da7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702414029105415457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee415d027ed4671b2676f60894c7ae6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b753a3df34f257297a9eea9db1dbff8c0af7faa08b10f8ddd3b6bd2503ef9d1f,PodSandboxId:b37ff135cb9e8644b1d735c197da80559d5f511a2a6ac187b965fc3a5b91f071,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702414028824193099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-824561,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c37bb33b909376a12737567c936cec,},Annotations:map[strin
g]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f5f8328-d6c3-4e8c-9250-77590aa84b2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a79859b241239       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   860ece1bab156       coredns-6d4b75cb6d-xg7ls
	5f1f7e9a577fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   27e78645da76f       storage-provisioner
	bc9068122ac6c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   6082d3f292366       kube-proxy-pxpn6
	c631be5bcc262       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   8997c96f50b1d       etcd-test-preload-824561
	0701630c6a216       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   0f608732f88bc       kube-scheduler-test-preload-824561
	3faecb8775089       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   8963aacb0a1bf       kube-controller-manager-test-preload-824561
	b753a3df34f25       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   b37ff135cb9e8       kube-apiserver-test-preload-824561
	
	
	==> coredns [a79859b241239ef36864fdd178491a8ee812d0f8275cdd3826264d8645a04372] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:49406 - 35182 "HINFO IN 7318839953517196122.5675569479519293152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01096937s
	
	
	==> describe nodes <==
	Name:               test-preload-824561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-824561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=test-preload-824561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T20_45_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:45:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-824561
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:47:25 +0000   Tue, 12 Dec 2023 20:45:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:47:25 +0000   Tue, 12 Dec 2023 20:45:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:47:25 +0000   Tue, 12 Dec 2023 20:45:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:47:25 +0000   Tue, 12 Dec 2023 20:47:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    test-preload-824561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0ab3fcb58a641fabca10a58155cc7bf
	  System UUID:                d0ab3fcb-58a6-41fa-bca1-0a58155cc7bf
	  Boot ID:                    edbc3f8d-11bf-45d8-a3bd-0589b90f2d34
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-xg7ls                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-test-preload-824561                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-824561             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-test-preload-824561    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-pxpn6                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-test-preload-824561             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x6 over 113s)  kubelet          Node test-preload-824561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet          Node test-preload-824561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x5 over 113s)  kubelet          Node test-preload-824561 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node test-preload-824561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node test-preload-824561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node test-preload-824561 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                92s                  kubelet          Node test-preload-824561 status is now: NodeReady
	  Normal  RegisteredNode           91s                  node-controller  Node test-preload-824561 event: Registered Node test-preload-824561 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)    kubelet          Node test-preload-824561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)    kubelet          Node test-preload-824561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 24s)    kubelet          Node test-preload-824561 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-824561 event: Registered Node test-preload-824561 in Controller
	
	
	==> dmesg <==
	[Dec12 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067154] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.390322] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.540304] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138480] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.504257] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.957479] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.099028] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.140546] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.097876] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.214772] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Dec12 20:47] systemd-fstab-generator[1097]: Ignoring "noauto" for root device
	[  +9.239915] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.328860] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [c631be5bcc2627c83dcc1fe76b710f3ba5ea42cd99abb1327debd0b7fb4e54ec] <==
	{"level":"info","ts":"2023-12-12T20:47:11.181Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6ca692280bc5404a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-12-12T20:47:11.186Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-12-12T20:47:11.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a switched to configuration voters=(7829105702924009546)"}
	{"level":"info","ts":"2023-12-12T20:47:11.186Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38179560bbe6e25a","local-member-id":"6ca692280bc5404a","added-peer-id":"6ca692280bc5404a","added-peer-peer-urls":["https://192.168.39.111:2380"]}
	{"level":"info","ts":"2023-12-12T20:47:11.186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"38179560bbe6e25a","local-member-id":"6ca692280bc5404a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:47:11.186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:47:11.190Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T20:47:11.193Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6ca692280bc5404a","initial-advertise-peer-urls":["https://192.168.39.111:2380"],"listen-peer-urls":["https://192.168.39.111:2380"],"advertise-client-urls":["https://192.168.39.111:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.111:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T20:47:11.192Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.111:2380"}
	{"level":"info","ts":"2023-12-12T20:47:11.194Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T20:47:11.197Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.111:2380"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a received MsgPreVoteResp from 6ca692280bc5404a at term 2"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a received MsgVoteResp from 6ca692280bc5404a at term 3"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a became leader at term 3"}
	{"level":"info","ts":"2023-12-12T20:47:12.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6ca692280bc5404a elected leader 6ca692280bc5404a at term 3"}
	{"level":"info","ts":"2023-12-12T20:47:12.128Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6ca692280bc5404a","local-member-attributes":"{Name:test-preload-824561 ClientURLs:[https://192.168.39.111:2379]}","request-path":"/0/members/6ca692280bc5404a/attributes","cluster-id":"38179560bbe6e25a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:47:12.128Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:47:12.129Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:47:12.129Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T20:47:12.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:47:12.130Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:47:12.131Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.111:2379"}
	
	
	==> kernel <==
	 20:47:31 up 1 min,  0 users,  load average: 2.12, 0.60, 0.21
	Linux test-preload-824561 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b753a3df34f257297a9eea9db1dbff8c0af7faa08b10f8ddd3b6bd2503ef9d1f] <==
	I1212 20:47:14.615882       1 controller.go:85] Starting OpenAPI V3 controller
	I1212 20:47:14.615919       1 naming_controller.go:291] Starting NamingConditionController
	I1212 20:47:14.616154       1 establishing_controller.go:76] Starting EstablishingController
	I1212 20:47:14.616201       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1212 20:47:14.616234       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1212 20:47:14.616264       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 20:47:14.676297       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1212 20:47:14.676802       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:47:14.680444       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1212 20:47:14.680495       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:47:14.680837       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1212 20:47:14.684166       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1212 20:47:14.690679       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E1212 20:47:14.698164       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1212 20:47:14.765725       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:47:15.234342       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 20:47:15.566216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:47:16.448497       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1212 20:47:16.465000       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1212 20:47:16.508479       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1212 20:47:16.531660       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:47:16.537693       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:47:16.841496       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1212 20:47:27.531482       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:47:27.535726       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3faecb87750890578bb524414e56ed7875dcacd4649ae3495b669209e0a36730] <==
	I1212 20:47:27.541584       1 shared_informer.go:262] Caches are synced for ephemeral
	I1212 20:47:27.543764       1 shared_informer.go:262] Caches are synced for deployment
	I1212 20:47:27.549905       1 shared_informer.go:262] Caches are synced for service account
	I1212 20:47:27.549960       1 shared_informer.go:262] Caches are synced for PVC protection
	I1212 20:47:27.550300       1 shared_informer.go:262] Caches are synced for GC
	I1212 20:47:27.552416       1 shared_informer.go:262] Caches are synced for persistent volume
	I1212 20:47:27.554569       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1212 20:47:27.558356       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1212 20:47:27.564582       1 shared_informer.go:262] Caches are synced for stateful set
	I1212 20:47:27.566337       1 shared_informer.go:262] Caches are synced for cronjob
	I1212 20:47:27.567753       1 shared_informer.go:262] Caches are synced for disruption
	I1212 20:47:27.567791       1 disruption.go:371] Sending events to api server.
	I1212 20:47:27.600949       1 shared_informer.go:262] Caches are synced for crt configmap
	I1212 20:47:27.650545       1 shared_informer.go:262] Caches are synced for taint
	I1212 20:47:27.650843       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1212 20:47:27.650944       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1212 20:47:27.651346       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-824561. Assuming now as a timestamp.
	I1212 20:47:27.651484       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1212 20:47:27.651561       1 event.go:294] "Event occurred" object="test-preload-824561" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-824561 event: Registered Node test-preload-824561 in Controller"
	I1212 20:47:27.667878       1 shared_informer.go:262] Caches are synced for resource quota
	I1212 20:47:27.669180       1 shared_informer.go:262] Caches are synced for resource quota
	I1212 20:47:27.734558       1 shared_informer.go:262] Caches are synced for attach detach
	I1212 20:47:28.160156       1 shared_informer.go:262] Caches are synced for garbage collector
	I1212 20:47:28.160206       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 20:47:28.206925       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [bc9068122ac6ca868f79b36ddee0a56120cbe5f5926963558a7fc3c3a48be019] <==
	I1212 20:47:16.705745       1 node.go:163] Successfully retrieved node IP: 192.168.39.111
	I1212 20:47:16.706143       1 server_others.go:138] "Detected node IP" address="192.168.39.111"
	I1212 20:47:16.706291       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1212 20:47:16.832383       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1212 20:47:16.832422       1 server_others.go:206] "Using iptables Proxier"
	I1212 20:47:16.832447       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1212 20:47:16.832650       1 server.go:661] "Version info" version="v1.24.4"
	I1212 20:47:16.832658       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:47:16.834554       1 config.go:317] "Starting service config controller"
	I1212 20:47:16.834595       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1212 20:47:16.834616       1 config.go:226] "Starting endpoint slice config controller"
	I1212 20:47:16.834620       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1212 20:47:16.835161       1 config.go:444] "Starting node config controller"
	I1212 20:47:16.835170       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1212 20:47:16.936340       1 shared_informer.go:262] Caches are synced for node config
	I1212 20:47:16.936420       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1212 20:47:16.936436       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [0701630c6a216ed82d01ccbfbe3160d70470987f1deb1127abe9d93317e48b9a] <==
	I1212 20:47:11.544876       1 serving.go:348] Generated self-signed cert in-memory
	W1212 20:47:14.634482       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:47:14.634612       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:47:14.634625       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:47:14.634633       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:47:14.699934       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1212 20:47:14.699988       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:47:14.704904       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1212 20:47:14.705210       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:47:14.705258       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:47:14.705298       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 20:47:14.806130       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:46:32 UTC, ends at Tue 2023-12-12 20:47:31 UTC. --
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862739    1103 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl4tq\" (UniqueName: \"kubernetes.io/projected/737dd1a0-476a-493f-bbbc-727d6e8abbc8-kube-api-access-zl4tq\") pod \"kube-proxy-pxpn6\" (UID: \"737dd1a0-476a-493f-bbbc-727d6e8abbc8\") " pod="kube-system/kube-proxy-pxpn6"
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862786    1103 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgmv\" (UniqueName: \"kubernetes.io/projected/9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc-kube-api-access-nhgmv\") pod \"storage-provisioner\" (UID: \"9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc\") " pod="kube-system/storage-provisioner"
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862851    1103 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume\") pod \"coredns-6d4b75cb6d-xg7ls\" (UID: \"7251ded9-bb5d-4f85-8567-ac4475ed9d1e\") " pod="kube-system/coredns-6d4b75cb6d-xg7ls"
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862874    1103 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx6g4\" (UniqueName: \"kubernetes.io/projected/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-kube-api-access-wx6g4\") pod \"coredns-6d4b75cb6d-xg7ls\" (UID: \"7251ded9-bb5d-4f85-8567-ac4475ed9d1e\") " pod="kube-system/coredns-6d4b75cb6d-xg7ls"
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862910    1103 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/737dd1a0-476a-493f-bbbc-727d6e8abbc8-lib-modules\") pod \"kube-proxy-pxpn6\" (UID: \"737dd1a0-476a-493f-bbbc-727d6e8abbc8\") " pod="kube-system/kube-proxy-pxpn6"
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862935    1103 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc-tmp\") pod \"storage-provisioner\" (UID: \"9f4e52c6-c6a4-4917-88d2-e7bfdd69e1cc\") " pod="kube-system/storage-provisioner"
	Dec 12 20:47:14 test-preload-824561 kubelet[1103]: I1212 20:47:14.862946    1103 reconciler.go:159] "Reconciler: start to sync state"
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: I1212 20:47:15.380660    1103 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28a7157e-bf2f-49c3-893c-aff886510769-config-volume\") pod \"28a7157e-bf2f-49c3-893c-aff886510769\" (UID: \"28a7157e-bf2f-49c3-893c-aff886510769\") "
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: I1212 20:47:15.380705    1103 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mtqb\" (UniqueName: \"kubernetes.io/projected/28a7157e-bf2f-49c3-893c-aff886510769-kube-api-access-6mtqb\") pod \"28a7157e-bf2f-49c3-893c-aff886510769\" (UID: \"28a7157e-bf2f-49c3-893c-aff886510769\") "
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: E1212 20:47:15.381707    1103 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: E1212 20:47:15.381811    1103 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume podName:7251ded9-bb5d-4f85-8567-ac4475ed9d1e nodeName:}" failed. No retries permitted until 2023-12-12 20:47:15.881783764 +0000 UTC m=+8.228970400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume") pod "coredns-6d4b75cb6d-xg7ls" (UID: "7251ded9-bb5d-4f85-8567-ac4475ed9d1e") : object "kube-system"/"coredns" not registered
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: W1212 20:47:15.383131    1103 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/28a7157e-bf2f-49c3-893c-aff886510769/volumes/kubernetes.io~projected/kube-api-access-6mtqb: clearQuota called, but quotas disabled
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: W1212 20:47:15.383280    1103 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/28a7157e-bf2f-49c3-893c-aff886510769/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: I1212 20:47:15.383656    1103 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a7157e-bf2f-49c3-893c-aff886510769-kube-api-access-6mtqb" (OuterVolumeSpecName: "kube-api-access-6mtqb") pod "28a7157e-bf2f-49c3-893c-aff886510769" (UID: "28a7157e-bf2f-49c3-893c-aff886510769"). InnerVolumeSpecName "kube-api-access-6mtqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: I1212 20:47:15.383921    1103 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a7157e-bf2f-49c3-893c-aff886510769-config-volume" (OuterVolumeSpecName: "config-volume") pod "28a7157e-bf2f-49c3-893c-aff886510769" (UID: "28a7157e-bf2f-49c3-893c-aff886510769"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: I1212 20:47:15.482070    1103 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28a7157e-bf2f-49c3-893c-aff886510769-config-volume\") on node \"test-preload-824561\" DevicePath \"\""
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: I1212 20:47:15.482106    1103 reconciler.go:384] "Volume detached for volume \"kube-api-access-6mtqb\" (UniqueName: \"kubernetes.io/projected/28a7157e-bf2f-49c3-893c-aff886510769-kube-api-access-6mtqb\") on node \"test-preload-824561\" DevicePath \"\""
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: E1212 20:47:15.885110    1103 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:47:15 test-preload-824561 kubelet[1103]: E1212 20:47:15.885205    1103 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume podName:7251ded9-bb5d-4f85-8567-ac4475ed9d1e nodeName:}" failed. No retries permitted until 2023-12-12 20:47:16.885189486 +0000 UTC m=+9.232376123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume") pod "coredns-6d4b75cb6d-xg7ls" (UID: "7251ded9-bb5d-4f85-8567-ac4475ed9d1e") : object "kube-system"/"coredns" not registered
	Dec 12 20:47:16 test-preload-824561 kubelet[1103]: E1212 20:47:16.891626    1103 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:47:16 test-preload-824561 kubelet[1103]: E1212 20:47:16.891725    1103 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume podName:7251ded9-bb5d-4f85-8567-ac4475ed9d1e nodeName:}" failed. No retries permitted until 2023-12-12 20:47:18.891705472 +0000 UTC m=+11.238892100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume") pod "coredns-6d4b75cb6d-xg7ls" (UID: "7251ded9-bb5d-4f85-8567-ac4475ed9d1e") : object "kube-system"/"coredns" not registered
	Dec 12 20:47:16 test-preload-824561 kubelet[1103]: E1212 20:47:16.925568    1103 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xg7ls" podUID=7251ded9-bb5d-4f85-8567-ac4475ed9d1e
	Dec 12 20:47:18 test-preload-824561 kubelet[1103]: E1212 20:47:18.906552    1103 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:47:18 test-preload-824561 kubelet[1103]: E1212 20:47:18.906674    1103 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume podName:7251ded9-bb5d-4f85-8567-ac4475ed9d1e nodeName:}" failed. No retries permitted until 2023-12-12 20:47:22.906608896 +0000 UTC m=+15.253795523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7251ded9-bb5d-4f85-8567-ac4475ed9d1e-config-volume") pod "coredns-6d4b75cb6d-xg7ls" (UID: "7251ded9-bb5d-4f85-8567-ac4475ed9d1e") : object "kube-system"/"coredns" not registered
	Dec 12 20:47:19 test-preload-824561 kubelet[1103]: I1212 20:47:19.932458    1103 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=28a7157e-bf2f-49c3-893c-aff886510769 path="/var/lib/kubelet/pods/28a7157e-bf2f-49c3-893c-aff886510769/volumes"
	
	
	==> storage-provisioner [5f1f7e9a577fc1d253cb7f1d60c3c119afe3c10f0793b002831ce68899da8fe0] <==
	I1212 20:47:16.934556       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-824561 -n test-preload-824561
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-824561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-824561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-824561
--- FAIL: TestPreload (186.65s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (167.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1764823453.exe start -p running-upgrade-317982 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1212 20:49:39.384678   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:49:51.928840   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1764823453.exe start -p running-upgrade-317982 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m17.599736393s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-317982 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-317982 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (26.343619146s)

                                                
                                                
-- stdout --
	* [running-upgrade-317982] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-317982 in cluster running-upgrade-317982
	* Updating the running kvm2 "running-upgrade-317982" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:51:49.302686   41349 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:51:49.302890   41349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:51:49.302900   41349 out.go:309] Setting ErrFile to fd 2...
	I1212 20:51:49.302908   41349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:51:49.303214   41349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:51:49.303982   41349 out.go:303] Setting JSON to false
	I1212 20:51:49.305292   41349 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5663,"bootTime":1702408646,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:51:49.305381   41349 start.go:138] virtualization: kvm guest
	I1212 20:51:49.307650   41349 out.go:177] * [running-upgrade-317982] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:51:49.309232   41349 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:51:49.309241   41349 notify.go:220] Checking for updates...
	I1212 20:51:49.310733   41349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:51:49.313675   41349 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:51:49.315291   41349 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:51:49.317447   41349 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:51:49.321249   41349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:51:49.323809   41349 config.go:182] Loaded profile config "running-upgrade-317982": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 20:51:49.323878   41349 start_flags.go:694] config upgrade: Driver=kvm2
	I1212 20:51:49.323902   41349 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 20:51:49.324032   41349 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/running-upgrade-317982/config.json ...
	I1212 20:51:49.324856   41349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:51:49.324948   41349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:51:49.351386   41349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I1212 20:51:49.355794   41349 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:51:49.356667   41349 main.go:141] libmachine: Using API Version  1
	I1212 20:51:49.356688   41349 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:51:49.357108   41349 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:51:49.357320   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:51:49.360182   41349 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 20:51:49.361864   41349 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:51:49.362304   41349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:51:49.362351   41349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:51:49.383625   41349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I1212 20:51:49.384059   41349 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:51:49.384556   41349 main.go:141] libmachine: Using API Version  1
	I1212 20:51:49.384605   41349 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:51:49.384976   41349 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:51:49.385166   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:51:49.439821   41349 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 20:51:49.443851   41349 start.go:298] selected driver: kvm2
	I1212 20:51:49.443871   41349 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-317982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.189 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 20:51:49.443963   41349 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:51:49.444602   41349 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.444673   41349 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 20:51:49.467445   41349 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 20:51:49.467931   41349 cni.go:84] Creating CNI manager for ""
	I1212 20:51:49.467954   41349 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 20:51:49.467966   41349 start_flags.go:323] config:
	{Name:running-upgrade-317982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.189 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 20:51:49.468191   41349 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.470190   41349 out.go:177] * Starting control plane node running-upgrade-317982 in cluster running-upgrade-317982
	I1212 20:51:49.471521   41349 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1212 20:51:49.494060   41349 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 20:51:49.494242   41349 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/running-upgrade-317982/config.json ...
	I1212 20:51:49.494587   41349 start.go:365] acquiring machines lock for running-upgrade-317982: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:51:49.494785   41349 cache.go:107] acquiring lock: {Name:mkc5b941ea8587c1bc8a54665a516a88675d8edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.494860   41349 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 20:51:49.494881   41349 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.55µs
	I1212 20:51:49.494892   41349 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 20:51:49.494907   41349 cache.go:107] acquiring lock: {Name:mk0f133ad78118e2a5c11940f155b90bfadc732c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.495018   41349 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1212 20:51:49.495208   41349 cache.go:107] acquiring lock: {Name:mk91c151f9f156e36f05440589a9ff8ef1e7e8de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.495257   41349 cache.go:107] acquiring lock: {Name:mk6b58b49dcf3512ee1b43881ab2cc941ee27bd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.495353   41349 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 20:51:49.495365   41349 cache.go:107] acquiring lock: {Name:mkedbf2a35fa1faf8fe6f4f30a20a6ab2821720d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.495406   41349 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1212 20:51:49.495435   41349 cache.go:107] acquiring lock: {Name:mkd79b9a919ae2b3e78cc4de74d2b8724d9b88e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.495499   41349 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1212 20:51:49.495209   41349 cache.go:107] acquiring lock: {Name:mkd1dd20e6786ecda1ff8576afd1ff735d553b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.495592   41349 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 20:51:49.495596   41349 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1212 20:51:49.495593   41349 cache.go:107] acquiring lock: {Name:mkdadc1bbd326b2134b9eb05edd70eb6f97fe04f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:51:49.498887   41349 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1212 20:51:49.499618   41349 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 20:51:49.499615   41349 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1212 20:51:49.499938   41349 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 20:51:49.500605   41349 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1212 20:51:49.500861   41349 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1212 20:51:49.500931   41349 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1212 20:51:49.500926   41349 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1212 20:51:49.674545   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 20:51:49.687591   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 20:51:49.720404   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1212 20:51:49.740611   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1212 20:51:49.742450   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1212 20:51:49.764081   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1212 20:51:49.793438   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1212 20:51:49.793467   41349 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 298.262859ms
	I1212 20:51:49.793485   41349 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1212 20:51:49.806675   41349 cache.go:162] opening:  /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1212 20:51:50.290728   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1212 20:51:50.290767   41349 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 795.407894ms
	I1212 20:51:50.290782   41349 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1212 20:51:50.711334   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1212 20:51:50.711427   41349 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.216010444s
	I1212 20:51:50.711464   41349 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1212 20:51:50.773079   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1212 20:51:50.773113   41349 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.277523773s
	I1212 20:51:50.773129   41349 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1212 20:51:51.097102   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1212 20:51:51.097136   41349 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.602230319s
	I1212 20:51:51.097148   41349 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1212 20:51:51.278104   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 20:51:51.278131   41349 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.782874963s
	I1212 20:51:51.278142   41349 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 20:51:51.579360   41349 cache.go:157] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1212 20:51:51.579394   41349 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.084195123s
	I1212 20:51:51.579407   41349 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1212 20:51:51.579430   41349 cache.go:87] Successfully saved all images to host disk.
	I1212 20:52:12.052521   41349 start.go:369] acquired machines lock for "running-upgrade-317982" in 22.557902539s
	I1212 20:52:12.052565   41349 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:52:12.052573   41349 fix.go:54] fixHost starting: minikube
	I1212 20:52:12.052976   41349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:52:12.053012   41349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:52:12.070251   41349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
	I1212 20:52:12.070721   41349 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:52:12.071214   41349 main.go:141] libmachine: Using API Version  1
	I1212 20:52:12.071231   41349 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:52:12.071609   41349 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:52:12.071790   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:12.071909   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetState
	I1212 20:52:12.073584   41349 fix.go:102] recreateIfNeeded on running-upgrade-317982: state=Running err=<nil>
	W1212 20:52:12.073620   41349 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:52:12.075887   41349 out.go:177] * Updating the running kvm2 "running-upgrade-317982" VM ...
	I1212 20:52:12.077894   41349 machine.go:88] provisioning docker machine ...
	I1212 20:52:12.077925   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:12.078153   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetMachineName
	I1212 20:52:12.078334   41349 buildroot.go:166] provisioning hostname "running-upgrade-317982"
	I1212 20:52:12.078359   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetMachineName
	I1212 20:52:12.078493   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:12.081367   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.081754   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:12.081786   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.081901   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:12.082058   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.082231   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.082404   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:12.082547   41349 main.go:141] libmachine: Using SSH client type: native
	I1212 20:52:12.083121   41349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.189 22 <nil> <nil>}
	I1212 20:52:12.083153   41349 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-317982 && echo "running-upgrade-317982" | sudo tee /etc/hostname
	I1212 20:52:12.213432   41349 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-317982
	
	I1212 20:52:12.213504   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:12.216792   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.217223   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:12.217270   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.217513   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:12.217724   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.217925   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.218103   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:12.218267   41349 main.go:141] libmachine: Using SSH client type: native
	I1212 20:52:12.218698   41349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.189 22 <nil> <nil>}
	I1212 20:52:12.218720   41349 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-317982' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-317982/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-317982' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:52:12.339937   41349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:52:12.339974   41349 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:52:12.339996   41349 buildroot.go:174] setting up certificates
	I1212 20:52:12.340008   41349 provision.go:83] configureAuth start
	I1212 20:52:12.340025   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetMachineName
	I1212 20:52:12.340335   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetIP
	I1212 20:52:12.343120   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.343496   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:12.343526   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.343682   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:12.345870   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.346268   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:12.346302   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.346494   41349 provision.go:138] copyHostCerts
	I1212 20:52:12.346555   41349 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:52:12.346568   41349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:52:12.346626   41349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:52:12.346725   41349 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:52:12.346734   41349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:52:12.346763   41349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:52:12.346843   41349 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:52:12.346853   41349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:52:12.346878   41349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:52:12.346945   41349 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-317982 san=[192.168.50.189 192.168.50.189 localhost 127.0.0.1 minikube running-upgrade-317982]
	I1212 20:52:12.499335   41349 provision.go:172] copyRemoteCerts
	I1212 20:52:12.499414   41349 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:52:12.499450   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:12.502494   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.502910   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:12.502946   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.503080   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:12.503299   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.503517   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:12.503687   41349 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/running-upgrade-317982/id_rsa Username:docker}
	I1212 20:52:12.590404   41349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:52:12.609976   41349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 20:52:12.625648   41349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:52:12.639196   41349 provision.go:86] duration metric: configureAuth took 299.174324ms
	I1212 20:52:12.639223   41349 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:52:12.639473   41349 config.go:182] Loaded profile config "running-upgrade-317982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 20:52:12.639543   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:12.642313   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.642746   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:12.642768   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:12.642967   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:12.643167   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.643380   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:12.643515   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:12.643663   41349 main.go:141] libmachine: Using SSH client type: native
	I1212 20:52:12.644029   41349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.189 22 <nil> <nil>}
	I1212 20:52:12.644059   41349 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:52:13.289308   41349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:52:13.289334   41349 machine.go:91] provisioned docker machine in 1.211421393s
	I1212 20:52:13.289345   41349 start.go:300] post-start starting for "running-upgrade-317982" (driver="kvm2")
	I1212 20:52:13.289355   41349 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:52:13.289392   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:13.289670   41349 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:52:13.289698   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:13.292984   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.293441   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:13.293472   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.293665   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:13.293873   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:13.294054   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:13.294276   41349 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/running-upgrade-317982/id_rsa Username:docker}
	I1212 20:52:13.380361   41349 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:52:13.386213   41349 info.go:137] Remote host: Buildroot 2019.02.7
	I1212 20:52:13.386244   41349 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:52:13.386343   41349 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:52:13.386456   41349 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:52:13.386592   41349 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:52:13.396576   41349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:52:13.421162   41349 start.go:303] post-start completed in 131.800859ms
	I1212 20:52:13.421192   41349 fix.go:56] fixHost completed within 1.368618136s
	I1212 20:52:13.421219   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:13.424686   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.425133   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:13.425207   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.425479   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:13.425703   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:13.425923   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:13.426115   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:13.426317   41349 main.go:141] libmachine: Using SSH client type: native
	I1212 20:52:13.426728   41349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.189 22 <nil> <nil>}
	I1212 20:52:13.426747   41349 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 20:52:13.545861   41349 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702414333.541535727
	
	I1212 20:52:13.545940   41349 fix.go:206] guest clock: 1702414333.541535727
	I1212 20:52:13.545961   41349 fix.go:219] Guest: 2023-12-12 20:52:13.541535727 +0000 UTC Remote: 2023-12-12 20:52:13.421197269 +0000 UTC m=+24.194087072 (delta=120.338458ms)
	I1212 20:52:13.546023   41349 fix.go:190] guest clock delta is within tolerance: 120.338458ms
	I1212 20:52:13.546040   41349 start.go:83] releasing machines lock for "running-upgrade-317982", held for 1.493494034s
	I1212 20:52:13.548668   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:13.549011   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetIP
	I1212 20:52:13.552105   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.552500   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:13.552531   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.552705   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:13.553314   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:13.553497   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .DriverName
	I1212 20:52:13.553608   41349 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:52:13.553653   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:13.553873   41349 ssh_runner.go:195] Run: cat /version.json
	I1212 20:52:13.553897   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHHostname
	I1212 20:52:13.556816   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.556936   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.557192   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:13.557219   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.557671   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:7b:dd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:50:03 +0000 UTC Type:0 Mac:52:54:00:51:7b:dd Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:running-upgrade-317982 Clientid:01:52:54:00:51:7b:dd}
	I1212 20:52:13.557716   41349 main.go:141] libmachine: (running-upgrade-317982) DBG | domain running-upgrade-317982 has defined IP address 192.168.50.189 and MAC address 52:54:00:51:7b:dd in network minikube-net
	I1212 20:52:13.557729   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:13.557968   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:13.557995   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHPort
	I1212 20:52:13.558088   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:13.558133   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHKeyPath
	I1212 20:52:13.558225   41349 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/running-upgrade-317982/id_rsa Username:docker}
	I1212 20:52:13.558877   41349 main.go:141] libmachine: (running-upgrade-317982) Calling .GetSSHUsername
	I1212 20:52:13.559069   41349 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/running-upgrade-317982/id_rsa Username:docker}
	W1212 20:52:13.676565   41349 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 20:52:13.676663   41349 ssh_runner.go:195] Run: systemctl --version
	I1212 20:52:13.682777   41349 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:52:13.846333   41349 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:52:13.855332   41349 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:52:13.855411   41349 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:52:13.866928   41349 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:52:13.866963   41349 start.go:475] detecting cgroup driver to use...
	I1212 20:52:13.867030   41349 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:52:13.886299   41349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:52:13.905131   41349 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:52:13.905205   41349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:52:13.917784   41349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:52:13.929843   41349 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 20:52:13.940628   41349 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 20:52:13.940690   41349 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:52:14.097155   41349 docker.go:219] disabling docker service ...
	I1212 20:52:14.097218   41349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:52:15.136618   41349 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.039371277s)
	I1212 20:52:15.136688   41349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:52:15.158342   41349 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:52:15.339400   41349 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:52:15.518631   41349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:52:15.531378   41349 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:52:15.546417   41349 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 20:52:15.546485   41349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:52:15.555328   41349 out.go:177] 
	W1212 20:52:15.557450   41349 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 20:52:15.557469   41349 out.go:239] * 
	* 
	W1212 20:52:15.558388   41349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:52:15.560045   41349 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-317982 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-12 20:52:15.582711121 +0000 UTC m=+3336.802883706
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-317982 -n running-upgrade-317982
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-317982 -n running-upgrade-317982: exit status 4 (374.580192ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:52:15.853808   41802 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-317982" does not appear in /home/jenkins/minikube-integration/17734-9188/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-317982" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-317982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-317982
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-317982: (1.926978554s)
--- FAIL: TestRunningBinaryUpgrade (167.13s)
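
Note on the failure above: the exit status 90 comes from the RUNTIME_ENABLE step in the stderr log. The new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but that drop-in does not exist on the guest that the v1.6.2 binary provisioned from minikube-v1.6.0.iso, so the sed exits with "No such file or directory". A minimal sketch of a more defensive variant of that step is below; it assumes the older ISO ships a monolithic /etc/crio/crio.conf rather than the drop-in directory (that fallback path is an assumption for illustration, not what minikube actually does).

	# hypothetical guard; the real minikube code targets the drop-in file unconditionally
	conf=/etc/crio/crio.conf.d/02-crio.conf
	if [ ! -f "$conf" ]; then
	  conf=/etc/crio/crio.conf   # assumed location on older guest images
	fi
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$conf"

With a guard like this the step would edit whichever CRI-O config is present instead of failing with status 1 on the older guest.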

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (290.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1553023058.exe start -p stopped-upgrade-709141 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1553023058.exe start -p stopped-upgrade-709141 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.554237371s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1553023058.exe -p stopped-upgrade-709141 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1553023058.exe -p stopped-upgrade-709141 stop: (1m33.054856468s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-709141 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-709141 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m7.527617813s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-709141] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-709141 in cluster stopped-upgrade-709141
	* Restarting existing kvm2 VM for "stopped-upgrade-709141" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:55:03.441229   46139 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:55:03.441379   46139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:55:03.441389   46139 out.go:309] Setting ErrFile to fd 2...
	I1212 20:55:03.441398   46139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:55:03.441626   46139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:55:03.442239   46139 out.go:303] Setting JSON to false
	I1212 20:55:03.443187   46139 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5857,"bootTime":1702408646,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:55:03.443272   46139 start.go:138] virtualization: kvm guest
	I1212 20:55:03.445656   46139 out.go:177] * [stopped-upgrade-709141] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:55:03.447777   46139 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:55:03.447804   46139 notify.go:220] Checking for updates...
	I1212 20:55:03.449346   46139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:55:03.450834   46139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:55:03.452406   46139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:55:03.453922   46139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:55:03.455340   46139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:55:03.457199   46139 config.go:182] Loaded profile config "stopped-upgrade-709141": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 20:55:03.457218   46139 start_flags.go:694] config upgrade: Driver=kvm2
	I1212 20:55:03.457226   46139 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 20:55:03.457298   46139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/stopped-upgrade-709141/config.json ...
	I1212 20:55:03.457864   46139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:55:03.457908   46139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:55:03.473182   46139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1212 20:55:03.473577   46139 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:55:03.474091   46139 main.go:141] libmachine: Using API Version  1
	I1212 20:55:03.474124   46139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:55:03.474434   46139 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:55:03.474619   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:55:03.476965   46139 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 20:55:03.478431   46139 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:55:03.478861   46139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:55:03.478911   46139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:55:03.493612   46139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I1212 20:55:03.494066   46139 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:55:03.494624   46139 main.go:141] libmachine: Using API Version  1
	I1212 20:55:03.494660   46139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:55:03.494995   46139 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:55:03.495188   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:55:03.534114   46139 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 20:55:03.535661   46139 start.go:298] selected driver: kvm2
	I1212 20:55:03.535674   46139 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-709141 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.186 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 20:55:03.535788   46139 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:55:03.536505   46139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.536585   46139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 20:55:03.551484   46139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 20:55:03.551875   46139 cni.go:84] Creating CNI manager for ""
	I1212 20:55:03.551899   46139 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 20:55:03.551908   46139 start_flags.go:323] config:
	{Name:stopped-upgrade-709141 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.186 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 20:55:03.552102   46139 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.554197   46139 out.go:177] * Starting control plane node stopped-upgrade-709141 in cluster stopped-upgrade-709141
	I1212 20:55:03.555543   46139 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1212 20:55:03.582192   46139 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 20:55:03.582350   46139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/stopped-upgrade-709141/config.json ...
	I1212 20:55:03.582465   46139 cache.go:107] acquiring lock: {Name:mkc5b941ea8587c1bc8a54665a516a88675d8edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582470   46139 cache.go:107] acquiring lock: {Name:mk0f133ad78118e2a5c11940f155b90bfadc732c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582507   46139 cache.go:107] acquiring lock: {Name:mk6b58b49dcf3512ee1b43881ab2cc941ee27bd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582569   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 20:55:03.582593   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 20:55:03.582561   46139 cache.go:107] acquiring lock: {Name:mkd1dd20e6786ecda1ff8576afd1ff735d553b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582590   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1212 20:55:03.582626   46139 cache.go:107] acquiring lock: {Name:mkedbf2a35fa1faf8fe6f4f30a20a6ab2821720d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582640   46139 cache.go:107] acquiring lock: {Name:mkd79b9a919ae2b3e78cc4de74d2b8724d9b88e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582606   46139 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 100.59µs
	I1212 20:55:03.582679   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1212 20:55:03.582675   46139 start.go:365] acquiring machines lock for stopped-upgrade-709141: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:55:03.582661   46139 cache.go:107] acquiring lock: {Name:mk91c151f9f156e36f05440589a9ff8ef1e7e8de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582688   46139 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 181.9µs
	I1212 20:55:03.582700   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1212 20:55:03.582706   46139 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1212 20:55:03.582681   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1212 20:55:03.582641   46139 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 180.431µs
	I1212 20:55:03.582722   46139 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 99.277µs
	I1212 20:55:03.582731   46139 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1212 20:55:03.582735   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1212 20:55:03.582467   46139 cache.go:107] acquiring lock: {Name:mkdadc1bbd326b2134b9eb05edd70eb6f97fe04f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:55:03.582586   46139 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 135.205µs
	I1212 20:55:03.582768   46139 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 20:55:03.582773   46139 cache.go:115] /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1212 20:55:03.582716   46139 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 78.157µs
	I1212 20:55:03.582783   46139 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1212 20:55:03.582751   46139 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 91.718µs
	I1212 20:55:03.582783   46139 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 331.522µs
	I1212 20:55:03.582790   46139 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1212 20:55:03.582794   46139 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1212 20:55:03.582759   46139 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1212 20:55:03.582676   46139 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 20:55:03.582808   46139 cache.go:87] Successfully saved all images to host disk.
	I1212 20:55:27.084608   46139 start.go:369] acquired machines lock for "stopped-upgrade-709141" in 23.501907074s
	I1212 20:55:27.084673   46139 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:55:27.084682   46139 fix.go:54] fixHost starting: minikube
	I1212 20:55:27.085181   46139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:55:27.085266   46139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:55:27.103414   46139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I1212 20:55:27.103928   46139 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:55:27.104493   46139 main.go:141] libmachine: Using API Version  1
	I1212 20:55:27.104515   46139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:55:27.104902   46139 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:55:27.105093   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:55:27.105265   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetState
	I1212 20:55:27.107344   46139 fix.go:102] recreateIfNeeded on stopped-upgrade-709141: state=Stopped err=<nil>
	I1212 20:55:27.107372   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	W1212 20:55:27.107519   46139 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 20:55:27.109414   46139 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-709141" ...
	I1212 20:55:27.110858   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .Start
	I1212 20:55:27.111046   46139 main.go:141] libmachine: (stopped-upgrade-709141) Ensuring networks are active...
	I1212 20:55:27.111780   46139 main.go:141] libmachine: (stopped-upgrade-709141) Ensuring network default is active
	I1212 20:55:27.112224   46139 main.go:141] libmachine: (stopped-upgrade-709141) Ensuring network minikube-net is active
	I1212 20:55:27.112738   46139 main.go:141] libmachine: (stopped-upgrade-709141) Getting domain xml...
	I1212 20:55:27.113467   46139 main.go:141] libmachine: (stopped-upgrade-709141) Creating domain...
	I1212 20:55:28.630495   46139 main.go:141] libmachine: (stopped-upgrade-709141) Waiting to get IP...
	I1212 20:55:28.631552   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:28.632147   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:28.632175   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:28.632008   46329 retry.go:31] will retry after 211.719228ms: waiting for machine to come up
	I1212 20:55:28.845667   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:28.846173   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:28.846194   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:28.846159   46329 retry.go:31] will retry after 368.171167ms: waiting for machine to come up
	I1212 20:55:29.215721   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:29.216387   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:29.216415   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:29.216278   46329 retry.go:31] will retry after 309.980619ms: waiting for machine to come up
	I1212 20:55:29.527642   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:29.528327   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:29.528362   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:29.528224   46329 retry.go:31] will retry after 374.268335ms: waiting for machine to come up
	I1212 20:55:29.903940   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:29.904653   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:29.904680   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:29.904545   46329 retry.go:31] will retry after 652.326018ms: waiting for machine to come up
	I1212 20:55:30.558411   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:30.559009   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:30.559032   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:30.558947   46329 retry.go:31] will retry after 952.159567ms: waiting for machine to come up
	I1212 20:55:31.512601   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:31.513097   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:31.513136   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:31.513051   46329 retry.go:31] will retry after 765.143743ms: waiting for machine to come up
	I1212 20:55:32.279445   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:32.279990   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:32.280014   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:32.279938   46329 retry.go:31] will retry after 1.25291184s: waiting for machine to come up
	I1212 20:55:33.534229   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:33.534769   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:33.534798   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:33.534712   46329 retry.go:31] will retry after 1.725468116s: waiting for machine to come up
	I1212 20:55:35.261942   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:35.262472   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:35.262516   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:35.262413   46329 retry.go:31] will retry after 1.712492519s: waiting for machine to come up
	I1212 20:55:37.299341   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:37.299891   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:37.299916   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:37.299833   46329 retry.go:31] will retry after 1.928561906s: waiting for machine to come up
	I1212 20:55:39.230094   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:39.230572   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:39.230606   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:39.230528   46329 retry.go:31] will retry after 3.53447522s: waiting for machine to come up
	I1212 20:55:42.766749   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:42.767310   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:42.767342   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:42.767278   46329 retry.go:31] will retry after 3.577253457s: waiting for machine to come up
	I1212 20:55:46.348973   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:46.349469   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:46.349502   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:46.349396   46329 retry.go:31] will retry after 5.36644962s: waiting for machine to come up
	I1212 20:55:51.720793   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:51.721327   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:51.721359   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:51.721269   46329 retry.go:31] will retry after 4.527081841s: waiting for machine to come up
	I1212 20:55:56.252508   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:55:56.253150   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | unable to find current IP address of domain stopped-upgrade-709141 in network minikube-net
	I1212 20:55:56.253183   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | I1212 20:55:56.253069   46329 retry.go:31] will retry after 5.770888204s: waiting for machine to come up
	I1212 20:56:02.025104   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.025567   46139 main.go:141] libmachine: (stopped-upgrade-709141) Found IP for machine: 192.168.50.186
	I1212 20:56:02.025593   46139 main.go:141] libmachine: (stopped-upgrade-709141) Reserving static IP address...
	I1212 20:56:02.025628   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has current primary IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.026003   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "stopped-upgrade-709141", mac: "52:54:00:68:e3:5a", ip: "192.168.50.186"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.026032   46139 main.go:141] libmachine: (stopped-upgrade-709141) Reserved static IP address: 192.168.50.186
	I1212 20:56:02.026049   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-709141", mac: "52:54:00:68:e3:5a", ip: "192.168.50.186"}
	I1212 20:56:02.026063   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | Getting to WaitForSSH function...
	I1212 20:56:02.026081   46139 main.go:141] libmachine: (stopped-upgrade-709141) Waiting for SSH to be available...
	I1212 20:56:02.028304   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.028655   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.028704   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.028813   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | Using SSH client type: external
	I1212 20:56:02.028838   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/stopped-upgrade-709141/id_rsa (-rw-------)
	I1212 20:56:02.028899   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/stopped-upgrade-709141/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 20:56:02.028928   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | About to run SSH command:
	I1212 20:56:02.028946   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | exit 0
	I1212 20:56:02.154856   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | SSH cmd err, output: <nil>: 
	I1212 20:56:02.155198   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetConfigRaw
	I1212 20:56:02.155840   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetIP
	I1212 20:56:02.158691   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.159180   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.159211   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.159451   46139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/stopped-upgrade-709141/config.json ...
	I1212 20:56:02.159681   46139 machine.go:88] provisioning docker machine ...
	I1212 20:56:02.159706   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:56:02.159938   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetMachineName
	I1212 20:56:02.160163   46139 buildroot.go:166] provisioning hostname "stopped-upgrade-709141"
	I1212 20:56:02.160187   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetMachineName
	I1212 20:56:02.160341   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:02.162607   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.163056   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.163082   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.163261   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:02.163452   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.163616   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.163753   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:02.163955   46139 main.go:141] libmachine: Using SSH client type: native
	I1212 20:56:02.164338   46139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I1212 20:56:02.164356   46139 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-709141 && echo "stopped-upgrade-709141" | sudo tee /etc/hostname
	I1212 20:56:02.286297   46139 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-709141
	
	I1212 20:56:02.286343   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:02.289226   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.289601   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.289627   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.289790   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:02.289996   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.290182   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.290340   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:02.290501   46139 main.go:141] libmachine: Using SSH client type: native
	I1212 20:56:02.290841   46139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I1212 20:56:02.290868   46139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-709141' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-709141/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-709141' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:56:02.403662   46139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:56:02.403695   46139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 20:56:02.403749   46139 buildroot.go:174] setting up certificates
	I1212 20:56:02.403778   46139 provision.go:83] configureAuth start
	I1212 20:56:02.403791   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetMachineName
	I1212 20:56:02.404077   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetIP
	I1212 20:56:02.407015   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.407489   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.407520   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.407726   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:02.409993   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.410347   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.410387   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.410571   46139 provision.go:138] copyHostCerts
	I1212 20:56:02.410615   46139 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 20:56:02.410624   46139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 20:56:02.410703   46139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 20:56:02.410834   46139 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 20:56:02.410851   46139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 20:56:02.410884   46139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 20:56:02.410957   46139 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 20:56:02.410966   46139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 20:56:02.410988   46139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 20:56:02.411031   46139 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-709141 san=[192.168.50.186 192.168.50.186 localhost 127.0.0.1 minikube stopped-upgrade-709141]
	I1212 20:56:02.596559   46139 provision.go:172] copyRemoteCerts
	I1212 20:56:02.596627   46139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:56:02.596650   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:02.599629   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.600013   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.600047   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.600272   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:02.600481   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.600655   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:02.600822   46139 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/stopped-upgrade-709141/id_rsa Username:docker}
	I1212 20:56:02.688305   46139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:56:02.705456   46139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 20:56:02.721759   46139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:56:02.736235   46139 provision.go:86] duration metric: configureAuth took 332.427539ms
	I1212 20:56:02.736269   46139 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:56:02.736476   46139 config.go:182] Loaded profile config "stopped-upgrade-709141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 20:56:02.736577   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:02.739745   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.740168   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:02.740203   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:02.740391   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:02.740582   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.740794   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:02.740981   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:02.741182   46139 main.go:141] libmachine: Using SSH client type: native
	I1212 20:56:02.741514   46139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I1212 20:56:02.741535   46139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:56:09.969274   46139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:56:09.969297   46139 machine.go:91] provisioned docker machine in 7.809600788s
	I1212 20:56:09.969310   46139 start.go:300] post-start starting for "stopped-upgrade-709141" (driver="kvm2")
	I1212 20:56:09.969322   46139 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:56:09.969341   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:56:09.969656   46139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:56:09.969682   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:09.972482   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:09.972846   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:09.972871   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:09.973047   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:09.973265   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:09.973441   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:09.973597   46139 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/stopped-upgrade-709141/id_rsa Username:docker}
	I1212 20:56:10.054384   46139 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:56:10.060263   46139 info.go:137] Remote host: Buildroot 2019.02.7
	I1212 20:56:10.060293   46139 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 20:56:10.060364   46139 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 20:56:10.060457   46139 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 20:56:10.060575   46139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:56:10.066992   46139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 20:56:10.083101   46139 start.go:303] post-start completed in 113.774448ms
	I1212 20:56:10.083135   46139 fix.go:56] fixHost completed within 42.998452617s
	I1212 20:56:10.083158   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:10.085987   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.086427   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:10.086462   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.086632   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:10.086852   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:10.087001   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:10.087132   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:10.087403   46139 main.go:141] libmachine: Using SSH client type: native
	I1212 20:56:10.087758   46139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I1212 20:56:10.087771   46139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 20:56:10.195937   46139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702414570.132688461
	
	I1212 20:56:10.195964   46139 fix.go:206] guest clock: 1702414570.132688461
	I1212 20:56:10.195974   46139 fix.go:219] Guest: 2023-12-12 20:56:10.132688461 +0000 UTC Remote: 2023-12-12 20:56:10.083139198 +0000 UTC m=+66.693607386 (delta=49.549263ms)
	I1212 20:56:10.196023   46139 fix.go:190] guest clock delta is within tolerance: 49.549263ms
	I1212 20:56:10.196031   46139 start.go:83] releasing machines lock for "stopped-upgrade-709141", held for 43.11138295s
	I1212 20:56:10.196061   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:56:10.196334   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetIP
	I1212 20:56:10.199150   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.199648   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:10.199675   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.199823   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:56:10.200337   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:56:10.200538   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .DriverName
	I1212 20:56:10.200650   46139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:56:10.200691   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:10.200746   46139 ssh_runner.go:195] Run: cat /version.json
	I1212 20:56:10.200769   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHHostname
	I1212 20:56:10.203433   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.203630   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.203830   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:10.203887   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.203990   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:10.204092   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:e3:5a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-12 21:55:55 +0000 UTC Type:0 Mac:52:54:00:68:e3:5a Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:stopped-upgrade-709141 Clientid:01:52:54:00:68:e3:5a}
	I1212 20:56:10.204125   46139 main.go:141] libmachine: (stopped-upgrade-709141) DBG | domain stopped-upgrade-709141 has defined IP address 192.168.50.186 and MAC address 52:54:00:68:e3:5a in network minikube-net
	I1212 20:56:10.204152   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:10.204259   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:10.204408   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHPort
	I1212 20:56:10.204412   46139 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/stopped-upgrade-709141/id_rsa Username:docker}
	I1212 20:56:10.204573   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHKeyPath
	I1212 20:56:10.204741   46139 main.go:141] libmachine: (stopped-upgrade-709141) Calling .GetSSHUsername
	I1212 20:56:10.204900   46139 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/stopped-upgrade-709141/id_rsa Username:docker}
	W1212 20:56:10.320421   46139 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 20:56:10.320501   46139 ssh_runner.go:195] Run: systemctl --version
	I1212 20:56:10.325363   46139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:56:10.492057   46139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:56:10.497901   46139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:56:10.497986   46139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:56:10.503506   46139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:56:10.503529   46139 start.go:475] detecting cgroup driver to use...
	I1212 20:56:10.503585   46139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:56:10.516267   46139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:56:10.526217   46139 docker.go:203] disabling cri-docker service (if available) ...
	I1212 20:56:10.526276   46139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:56:10.534992   46139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:56:10.543301   46139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 20:56:10.551788   46139 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 20:56:10.551846   46139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:56:10.656406   46139 docker.go:219] disabling docker service ...
	I1212 20:56:10.656474   46139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:56:10.670403   46139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:56:10.678645   46139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:56:10.778628   46139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:56:10.872321   46139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:56:10.882163   46139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:56:10.894469   46139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 20:56:10.894548   46139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:56:10.903362   46139 out.go:177] 
	W1212 20:56:10.905039   46139 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 20:56:10.905070   46139 out.go:239] * 
	* 
	W1212 20:56:10.905929   46139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:56:10.908432   46139 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-709141 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (290.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-343495 --alsologtostderr -v=3
E1212 21:01:55.938962   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-343495 --alsologtostderr -v=3: exit status 82 (2m1.768460657s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-343495"  ...
	* Stopping node "no-preload-343495"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:01:55.090394   59607 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:01:55.090667   59607 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:01:55.090677   59607 out.go:309] Setting ErrFile to fd 2...
	I1212 21:01:55.090682   59607 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:01:55.090939   59607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:01:55.091234   59607 out.go:303] Setting JSON to false
	I1212 21:01:55.091360   59607 mustload.go:65] Loading cluster: no-preload-343495
	I1212 21:01:55.091781   59607 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:01:55.091871   59607 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/config.json ...
	I1212 21:01:55.092058   59607 mustload.go:65] Loading cluster: no-preload-343495
	I1212 21:01:55.092218   59607 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:01:55.092268   59607 stop.go:39] StopHost: no-preload-343495
	I1212 21:01:55.092826   59607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:01:55.092876   59607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:01:55.107064   59607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45425
	I1212 21:01:55.107567   59607 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:01:55.108232   59607 main.go:141] libmachine: Using API Version  1
	I1212 21:01:55.108269   59607 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:01:55.108688   59607 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:01:55.111666   59607 out.go:177] * Stopping node "no-preload-343495"  ...
	I1212 21:01:55.113293   59607 main.go:141] libmachine: Stopping "no-preload-343495"...
	I1212 21:01:55.113335   59607 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:01:55.115348   59607 main.go:141] libmachine: (no-preload-343495) Calling .Stop
	I1212 21:01:55.119410   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 0/60
	I1212 21:01:56.120951   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 1/60
	I1212 21:01:57.122450   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 2/60
	I1212 21:01:58.124882   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 3/60
	I1212 21:01:59.126205   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 4/60
	I1212 21:02:00.128595   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 5/60
	I1212 21:02:01.130054   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 6/60
	I1212 21:02:02.131563   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 7/60
	I1212 21:02:03.133970   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 8/60
	I1212 21:02:04.136427   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 9/60
	I1212 21:02:05.137952   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 10/60
	I1212 21:02:06.139913   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 11/60
	I1212 21:02:07.141935   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 12/60
	I1212 21:02:08.143220   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 13/60
	I1212 21:02:09.144776   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 14/60
	I1212 21:02:10.146490   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 15/60
	I1212 21:02:11.147907   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 16/60
	I1212 21:02:12.149703   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 17/60
	I1212 21:02:13.151402   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 18/60
	I1212 21:02:14.153803   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 19/60
	I1212 21:02:15.155863   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 20/60
	I1212 21:02:16.158090   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 21/60
	I1212 21:02:17.160221   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 22/60
	I1212 21:02:18.162131   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 23/60
	I1212 21:02:19.163583   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 24/60
	I1212 21:02:20.165485   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 25/60
	I1212 21:02:21.166922   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 26/60
	I1212 21:02:22.168269   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 27/60
	I1212 21:02:23.169796   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 28/60
	I1212 21:02:24.171125   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 29/60
	I1212 21:02:25.173377   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 30/60
	I1212 21:02:26.174693   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 31/60
	I1212 21:02:27.175900   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 32/60
	I1212 21:02:28.177162   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 33/60
	I1212 21:02:29.178887   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 34/60
	I1212 21:02:30.180426   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 35/60
	I1212 21:02:31.181848   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 36/60
	I1212 21:02:32.183124   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 37/60
	I1212 21:02:33.184429   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 38/60
	I1212 21:02:34.185897   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 39/60
	I1212 21:02:35.188173   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 40/60
	I1212 21:02:36.189542   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 41/60
	I1212 21:02:37.190738   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 42/60
	I1212 21:02:38.192201   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 43/60
	I1212 21:02:39.193790   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 44/60
	I1212 21:02:40.195985   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 45/60
	I1212 21:02:41.197761   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 46/60
	I1212 21:02:42.199053   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 47/60
	I1212 21:02:43.200341   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 48/60
	I1212 21:02:44.201779   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 49/60
	I1212 21:02:45.203796   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 50/60
	I1212 21:02:46.205698   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 51/60
	I1212 21:02:47.207055   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 52/60
	I1212 21:02:48.208426   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 53/60
	I1212 21:02:49.209774   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 54/60
	I1212 21:02:50.211698   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 55/60
	I1212 21:02:51.213055   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 56/60
	I1212 21:02:52.214321   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 57/60
	I1212 21:02:53.215761   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 58/60
	I1212 21:02:54.217505   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 59/60
	I1212 21:02:55.218521   59607 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:02:55.218580   59607 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:02:55.218597   59607 retry.go:31] will retry after 1.452980764s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:02:56.672427   59607 stop.go:39] StopHost: no-preload-343495
	I1212 21:02:56.672831   59607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:02:56.672885   59607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:02:56.686856   59607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
	I1212 21:02:56.687342   59607 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:02:56.687774   59607 main.go:141] libmachine: Using API Version  1
	I1212 21:02:56.687799   59607 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:02:56.688152   59607 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:02:56.690453   59607 out.go:177] * Stopping node "no-preload-343495"  ...
	I1212 21:02:56.692243   59607 main.go:141] libmachine: Stopping "no-preload-343495"...
	I1212 21:02:56.692265   59607 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:02:56.694041   59607 main.go:141] libmachine: (no-preload-343495) Calling .Stop
	I1212 21:02:56.697459   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 0/60
	I1212 21:02:57.699110   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 1/60
	I1212 21:02:58.700964   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 2/60
	I1212 21:02:59.702396   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 3/60
	I1212 21:03:00.703974   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 4/60
	I1212 21:03:01.706047   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 5/60
	I1212 21:03:02.707372   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 6/60
	I1212 21:03:03.708724   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 7/60
	I1212 21:03:04.709917   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 8/60
	I1212 21:03:05.711222   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 9/60
	I1212 21:03:06.712970   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 10/60
	I1212 21:03:07.714302   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 11/60
	I1212 21:03:08.715858   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 12/60
	I1212 21:03:09.717273   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 13/60
	I1212 21:03:10.718925   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 14/60
	I1212 21:03:11.721002   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 15/60
	I1212 21:03:12.722489   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 16/60
	I1212 21:03:13.724703   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 17/60
	I1212 21:03:14.726494   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 18/60
	I1212 21:03:15.727915   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 19/60
	I1212 21:03:16.729967   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 20/60
	I1212 21:03:17.731422   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 21/60
	I1212 21:03:18.733159   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 22/60
	I1212 21:03:19.734570   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 23/60
	I1212 21:03:20.736166   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 24/60
	I1212 21:03:21.738456   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 25/60
	I1212 21:03:22.740395   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 26/60
	I1212 21:03:23.741867   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 27/60
	I1212 21:03:24.743191   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 28/60
	I1212 21:03:25.744717   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 29/60
	I1212 21:03:26.746952   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 30/60
	I1212 21:03:27.748652   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 31/60
	I1212 21:03:28.750198   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 32/60
	I1212 21:03:29.751758   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 33/60
	I1212 21:03:30.753009   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 34/60
	I1212 21:03:31.754675   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 35/60
	I1212 21:03:32.756168   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 36/60
	I1212 21:03:33.757577   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 37/60
	I1212 21:03:34.758841   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 38/60
	I1212 21:03:35.760220   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 39/60
	I1212 21:03:36.762065   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 40/60
	I1212 21:03:37.763535   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 41/60
	I1212 21:03:38.764978   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 42/60
	I1212 21:03:39.766134   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 43/60
	I1212 21:03:40.767478   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 44/60
	I1212 21:03:41.769407   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 45/60
	I1212 21:03:42.770596   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 46/60
	I1212 21:03:43.772022   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 47/60
	I1212 21:03:44.773415   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 48/60
	I1212 21:03:45.774947   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 49/60
	I1212 21:03:46.776695   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 50/60
	I1212 21:03:47.778012   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 51/60
	I1212 21:03:48.779501   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 52/60
	I1212 21:03:49.780762   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 53/60
	I1212 21:03:50.782230   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 54/60
	I1212 21:03:51.784071   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 55/60
	I1212 21:03:52.785398   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 56/60
	I1212 21:03:53.786701   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 57/60
	I1212 21:03:54.788080   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 58/60
	I1212 21:03:55.789734   59607 main.go:141] libmachine: (no-preload-343495) Waiting for machine to stop 59/60
	I1212 21:03:56.790641   59607 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:03:56.790681   59607 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:03:56.792733   59607 out.go:177] 
	W1212 21:03:56.794226   59607 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 21:03:56.794238   59607 out.go:239] * 
	* 
	W1212 21:03:56.797358   59607 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:03:56.798771   59607 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-343495 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495
E1212 21:04:06.283707   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495: exit status 3 (18.659339075s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:15.459611   60394 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	E1212 21:04:15.459632   60394 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-343495" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.43s)
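The stop failure above follows a visible wait-and-retry shape: ask the driver to stop the VM, poll the machine state once per second for 60 attempts, retry the whole stop once after a short backoff, and finally exit 82 with GUEST_STOP_TIMEOUT if the guest still reports "Running". Below is a minimal Go sketch of that pattern; the Driver interface and function names are stand-ins, not libmachine's real API.

// Minimal sketch of the stop/poll/retry pattern visible in the log above.
// The Driver interface and timings are stand-ins, not libmachine's real API.
package stopsketch

import (
	"fmt"
	"time"
)

type Driver interface {
	Stop() error            // ask the hypervisor to stop the VM
	State() (string, error) // e.g. "Running", "Stopped"
}

// stopHost mirrors the behaviour in the log: 60 one-second polls per attempt,
// two attempts in total, and an error if the VM still reports "Running".
func stopHost(d Driver) error {
	for attempt := 0; attempt < 2; attempt++ {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < 60; i++ {
			state, err := d.State()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil // machine stopped within the window
			}
			time.Sleep(time.Second)
		}
		time.Sleep(time.Second) // brief backoff before the second attempt
	}
	return fmt.Errorf(`unable to stop vm, current state "Running"`)
}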

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (140.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-831188 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-831188 --alsologtostderr -v=3: exit status 82 (2m1.405968219s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-831188"  ...
	* Stopping node "embed-certs-831188"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:02:14.909748   59800 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:02:14.910239   59800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:02:14.910255   59800 out.go:309] Setting ErrFile to fd 2...
	I1212 21:02:14.910264   59800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:02:14.910620   59800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:02:14.910964   59800 out.go:303] Setting JSON to false
	I1212 21:02:14.911062   59800 mustload.go:65] Loading cluster: embed-certs-831188
	I1212 21:02:14.911572   59800 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:02:14.911678   59800 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/config.json ...
	I1212 21:02:14.911877   59800 mustload.go:65] Loading cluster: embed-certs-831188
	I1212 21:02:14.912026   59800 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:02:14.912059   59800 stop.go:39] StopHost: embed-certs-831188
	I1212 21:02:14.912678   59800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:02:14.912730   59800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:02:14.927959   59800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I1212 21:02:14.928452   59800 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:02:14.929045   59800 main.go:141] libmachine: Using API Version  1
	I1212 21:02:14.929066   59800 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:02:14.929408   59800 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:02:14.931495   59800 out.go:177] * Stopping node "embed-certs-831188"  ...
	I1212 21:02:14.933114   59800 main.go:141] libmachine: Stopping "embed-certs-831188"...
	I1212 21:02:14.933146   59800 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:02:14.935038   59800 main.go:141] libmachine: (embed-certs-831188) Calling .Stop
	I1212 21:02:14.939040   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 0/60
	I1212 21:02:15.941485   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 1/60
	I1212 21:02:16.943100   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 2/60
	I1212 21:02:17.944853   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 3/60
	I1212 21:02:18.946157   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 4/60
	I1212 21:02:19.948099   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 5/60
	I1212 21:02:20.949606   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 6/60
	I1212 21:02:21.951030   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 7/60
	I1212 21:02:22.953157   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 8/60
	I1212 21:02:23.954772   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 9/60
	I1212 21:02:24.956360   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 10/60
	I1212 21:02:25.957840   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 11/60
	I1212 21:02:26.959071   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 12/60
	I1212 21:02:27.960952   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 13/60
	I1212 21:02:28.962100   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 14/60
	I1212 21:02:29.963966   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 15/60
	I1212 21:02:30.965400   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 16/60
	I1212 21:02:31.966831   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 17/60
	I1212 21:02:32.968162   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 18/60
	I1212 21:02:33.969507   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 19/60
	I1212 21:02:34.971761   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 20/60
	I1212 21:02:35.973091   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 21/60
	I1212 21:02:36.974297   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 22/60
	I1212 21:02:37.975989   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 23/60
	I1212 21:02:38.977450   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 24/60
	I1212 21:02:39.979826   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 25/60
	I1212 21:02:40.981421   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 26/60
	I1212 21:02:41.982727   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 27/60
	I1212 21:02:42.984076   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 28/60
	I1212 21:02:43.985664   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 29/60
	I1212 21:02:44.987746   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 30/60
	I1212 21:02:45.989074   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 31/60
	I1212 21:02:46.990452   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 32/60
	I1212 21:02:47.991838   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 33/60
	I1212 21:02:48.993087   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 34/60
	I1212 21:02:49.994977   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 35/60
	I1212 21:02:50.996481   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 36/60
	I1212 21:02:51.997910   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 37/60
	I1212 21:02:52.999441   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 38/60
	I1212 21:02:54.000830   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 39/60
	I1212 21:02:55.003147   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 40/60
	I1212 21:02:56.004540   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 41/60
	I1212 21:02:57.005905   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 42/60
	I1212 21:02:58.007436   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 43/60
	I1212 21:02:59.008750   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 44/60
	I1212 21:03:00.010652   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 45/60
	I1212 21:03:01.012090   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 46/60
	I1212 21:03:02.013429   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 47/60
	I1212 21:03:03.014685   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 48/60
	I1212 21:03:04.016135   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 49/60
	I1212 21:03:05.018381   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 50/60
	I1212 21:03:06.019654   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 51/60
	I1212 21:03:07.022135   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 52/60
	I1212 21:03:08.023603   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 53/60
	I1212 21:03:09.024885   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 54/60
	I1212 21:03:10.026840   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 55/60
	I1212 21:03:11.028223   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 56/60
	I1212 21:03:12.029661   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 57/60
	I1212 21:03:13.031106   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 58/60
	I1212 21:03:14.032491   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 59/60
	I1212 21:03:15.033988   59800 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:03:15.034051   59800 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:03:15.034075   59800 retry.go:31] will retry after 1.098385801s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:03:16.133316   59800 stop.go:39] StopHost: embed-certs-831188
	I1212 21:03:16.133939   59800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:03:16.133999   59800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:03:16.148335   59800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1212 21:03:16.148841   59800 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:03:16.149368   59800 main.go:141] libmachine: Using API Version  1
	I1212 21:03:16.149394   59800 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:03:16.149696   59800 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:03:16.153108   59800 out.go:177] * Stopping node "embed-certs-831188"  ...
	I1212 21:03:16.154604   59800 main.go:141] libmachine: Stopping "embed-certs-831188"...
	I1212 21:03:16.154622   59800 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:03:16.156248   59800 main.go:141] libmachine: (embed-certs-831188) Calling .Stop
	I1212 21:03:16.159799   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 0/60
	I1212 21:03:17.161114   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 1/60
	I1212 21:03:18.162786   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 2/60
	I1212 21:03:19.164127   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 3/60
	I1212 21:03:20.165522   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 4/60
	I1212 21:03:21.167432   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 5/60
	I1212 21:03:22.170114   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 6/60
	I1212 21:03:23.171543   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 7/60
	I1212 21:03:24.173887   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 8/60
	I1212 21:03:25.175199   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 9/60
	I1212 21:03:26.177237   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 10/60
	I1212 21:03:27.179296   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 11/60
	I1212 21:03:28.180565   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 12/60
	I1212 21:03:29.182070   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 13/60
	I1212 21:03:30.183209   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 14/60
	I1212 21:03:31.184784   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 15/60
	I1212 21:03:32.186038   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 16/60
	I1212 21:03:33.187272   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 17/60
	I1212 21:03:34.188499   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 18/60
	I1212 21:03:35.189686   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 19/60
	I1212 21:03:36.191415   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 20/60
	I1212 21:03:37.193013   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 21/60
	I1212 21:03:38.194494   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 22/60
	I1212 21:03:39.196043   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 23/60
	I1212 21:03:40.197482   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 24/60
	I1212 21:03:41.198982   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 25/60
	I1212 21:03:42.200326   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 26/60
	I1212 21:03:43.201584   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 27/60
	I1212 21:03:44.202860   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 28/60
	I1212 21:03:45.204763   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 29/60
	I1212 21:03:46.206615   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 30/60
	I1212 21:03:47.208036   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 31/60
	I1212 21:03:48.209422   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 32/60
	I1212 21:03:49.210773   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 33/60
	I1212 21:03:50.212094   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 34/60
	I1212 21:03:51.213777   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 35/60
	I1212 21:03:52.215341   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 36/60
	I1212 21:03:53.216598   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 37/60
	I1212 21:03:54.217881   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 38/60
	I1212 21:03:55.219278   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 39/60
	I1212 21:03:56.220607   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 40/60
	I1212 21:03:57.222136   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 41/60
	I1212 21:03:58.223569   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 42/60
	I1212 21:03:59.225032   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 43/60
	I1212 21:04:00.226328   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 44/60
	I1212 21:04:01.228224   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 45/60
	I1212 21:04:02.229548   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 46/60
	I1212 21:04:03.230903   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 47/60
	I1212 21:04:04.232252   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 48/60
	I1212 21:04:05.233558   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 49/60
	I1212 21:04:06.235413   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 50/60
	I1212 21:04:07.237081   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 51/60
	I1212 21:04:08.238368   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 52/60
	I1212 21:04:09.239823   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 53/60
	I1212 21:04:10.241045   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 54/60
	I1212 21:04:11.242713   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 55/60
	I1212 21:04:12.244069   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 56/60
	I1212 21:04:13.245328   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 57/60
	I1212 21:04:14.246794   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 58/60
	I1212 21:04:15.248109   59800 main.go:141] libmachine: (embed-certs-831188) Waiting for machine to stop 59/60
	I1212 21:04:16.249087   59800 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:04:16.249128   59800 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:04:16.250998   59800 out.go:177] 
	W1212 21:04:16.252311   59800 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 21:04:16.252327   59800 out.go:239] * 
	* 
	W1212 21:04:16.255446   59800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:04:16.256927   59800 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-831188 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188: exit status 3 (18.656576201s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:34.915571   60500 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host
	E1212 21:04:34.915592   60500 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-831188" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.06s)
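For context on the post-mortem blocks above: after a failed stop, the harness checks the host state with `minikube status --format={{.Host}}`, treats a non-zero exit as possibly benign, and skips log retrieval when the host is not Running. A rough Go sketch of that flow follows; the names are illustrative, not the real helpers_test.go helpers.

// Rough sketch of the post-mortem status check shown above; names are
// illustrative, not the real helpers_test.go helpers.
package postmortem

import (
	"os/exec"
	"strings"
	"testing"
)

func postMortem(t *testing.T, profile string) {
	t.Log("-----------------------post-mortem--------------------------------")
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// A non-zero exit here may simply mean the host is stopped or unreachable.
		t.Logf("status error: %v (may be ok)", err)
	}
	if state != "Running" {
		t.Logf("%q host is not running, skipping log retrieval (state=%q)", profile, state)
		return
	}
	// Only a running host would have its logs collected here.
}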

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-372099 --alsologtostderr -v=3
E1212 21:02:25.370555   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:26.659709   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:02:27.931012   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:33.051845   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:43.292687   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:03:03.773082   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:03:07.620044   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:03:12.358244   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.363537   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.373806   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.394137   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.434525   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.514894   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.675322   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:12.996370   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:13.636998   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:14.917981   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:17.478716   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-372099 --alsologtostderr -v=3: exit status 82 (2m1.205406191s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-372099"  ...
	* Stopping node "old-k8s-version-372099"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:02:24.986584   59937 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:02:24.986725   59937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:02:24.986749   59937 out.go:309] Setting ErrFile to fd 2...
	I1212 21:02:24.986754   59937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:02:24.986925   59937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:02:24.987170   59937 out.go:303] Setting JSON to false
	I1212 21:02:24.987278   59937 mustload.go:65] Loading cluster: old-k8s-version-372099
	I1212 21:02:24.987646   59937 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:02:24.987716   59937 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/config.json ...
	I1212 21:02:24.987874   59937 mustload.go:65] Loading cluster: old-k8s-version-372099
	I1212 21:02:24.987975   59937 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:02:24.987997   59937 stop.go:39] StopHost: old-k8s-version-372099
	I1212 21:02:24.988380   59937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:02:24.988430   59937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:02:25.003682   59937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I1212 21:02:25.004227   59937 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:02:25.004882   59937 main.go:141] libmachine: Using API Version  1
	I1212 21:02:25.004908   59937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:02:25.005301   59937 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:02:25.007798   59937 out.go:177] * Stopping node "old-k8s-version-372099"  ...
	I1212 21:02:25.009628   59937 main.go:141] libmachine: Stopping "old-k8s-version-372099"...
	I1212 21:02:25.009657   59937 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:02:25.011845   59937 main.go:141] libmachine: (old-k8s-version-372099) Calling .Stop
	I1212 21:02:25.015650   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 0/60
	I1212 21:02:26.017110   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 1/60
	I1212 21:02:27.018423   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 2/60
	I1212 21:02:28.019752   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 3/60
	I1212 21:02:29.021023   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 4/60
	I1212 21:02:30.022998   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 5/60
	I1212 21:02:31.025147   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 6/60
	I1212 21:02:32.027032   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 7/60
	I1212 21:02:33.028471   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 8/60
	I1212 21:02:34.029641   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 9/60
	I1212 21:02:35.031865   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 10/60
	I1212 21:02:36.033295   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 11/60
	I1212 21:02:37.034495   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 12/60
	I1212 21:02:38.036471   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 13/60
	I1212 21:02:39.037854   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 14/60
	I1212 21:02:40.039737   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 15/60
	I1212 21:02:41.041082   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 16/60
	I1212 21:02:42.042560   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 17/60
	I1212 21:02:43.043900   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 18/60
	I1212 21:02:44.045804   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 19/60
	I1212 21:02:45.047660   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 20/60
	I1212 21:02:46.048962   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 21/60
	I1212 21:02:47.050287   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 22/60
	I1212 21:02:48.051706   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 23/60
	I1212 21:02:49.053149   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 24/60
	I1212 21:02:50.055214   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 25/60
	I1212 21:02:51.056728   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 26/60
	I1212 21:02:52.058161   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 27/60
	I1212 21:02:53.059481   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 28/60
	I1212 21:02:54.061781   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 29/60
	I1212 21:02:55.063703   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 30/60
	I1212 21:02:56.064935   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 31/60
	I1212 21:02:57.066239   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 32/60
	I1212 21:02:58.067651   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 33/60
	I1212 21:02:59.069112   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 34/60
	I1212 21:03:00.071046   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 35/60
	I1212 21:03:01.072524   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 36/60
	I1212 21:03:02.073789   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 37/60
	I1212 21:03:03.075255   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 38/60
	I1212 21:03:04.076593   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 39/60
	I1212 21:03:05.078716   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 40/60
	I1212 21:03:06.080033   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 41/60
	I1212 21:03:07.081305   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 42/60
	I1212 21:03:08.082755   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 43/60
	I1212 21:03:09.084112   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 44/60
	I1212 21:03:10.085938   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 45/60
	I1212 21:03:11.087379   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 46/60
	I1212 21:03:12.088688   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 47/60
	I1212 21:03:13.090231   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 48/60
	I1212 21:03:14.091730   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 49/60
	I1212 21:03:15.093739   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 50/60
	I1212 21:03:16.095068   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 51/60
	I1212 21:03:17.096482   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 52/60
	I1212 21:03:18.098190   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 53/60
	I1212 21:03:19.099586   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 54/60
	I1212 21:03:20.101291   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 55/60
	I1212 21:03:21.102887   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 56/60
	I1212 21:03:22.104425   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 57/60
	I1212 21:03:23.105585   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 58/60
	I1212 21:03:24.107063   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 59/60
	I1212 21:03:25.108394   59937 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:03:25.108469   59937 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:03:25.108495   59937 retry.go:31] will retry after 896.333202ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:03:26.005532   59937 stop.go:39] StopHost: old-k8s-version-372099
	I1212 21:03:26.005914   59937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:03:26.005962   59937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:03:26.020292   59937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1212 21:03:26.020729   59937 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:03:26.021203   59937 main.go:141] libmachine: Using API Version  1
	I1212 21:03:26.021226   59937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:03:26.021578   59937 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:03:26.023731   59937 out.go:177] * Stopping node "old-k8s-version-372099"  ...
	I1212 21:03:26.025222   59937 main.go:141] libmachine: Stopping "old-k8s-version-372099"...
	I1212 21:03:26.025248   59937 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:03:26.027038   59937 main.go:141] libmachine: (old-k8s-version-372099) Calling .Stop
	I1212 21:03:26.031205   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 0/60
	I1212 21:03:27.032588   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 1/60
	I1212 21:03:28.034003   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 2/60
	I1212 21:03:29.035307   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 3/60
	I1212 21:03:30.036572   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 4/60
	I1212 21:03:31.038208   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 5/60
	I1212 21:03:32.039645   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 6/60
	I1212 21:03:33.040959   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 7/60
	I1212 21:03:34.042570   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 8/60
	I1212 21:03:35.043828   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 9/60
	I1212 21:03:36.045812   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 10/60
	I1212 21:03:37.047280   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 11/60
	I1212 21:03:38.048540   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 12/60
	I1212 21:03:39.049879   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 13/60
	I1212 21:03:40.051631   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 14/60
	I1212 21:03:41.053418   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 15/60
	I1212 21:03:42.054710   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 16/60
	I1212 21:03:43.056068   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 17/60
	I1212 21:03:44.057409   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 18/60
	I1212 21:03:45.058768   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 19/60
	I1212 21:03:46.060671   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 20/60
	I1212 21:03:47.062078   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 21/60
	I1212 21:03:48.063471   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 22/60
	I1212 21:03:49.065103   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 23/60
	I1212 21:03:50.066646   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 24/60
	I1212 21:03:51.068459   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 25/60
	I1212 21:03:52.069971   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 26/60
	I1212 21:03:53.071286   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 27/60
	I1212 21:03:54.072543   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 28/60
	I1212 21:03:55.074003   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 29/60
	I1212 21:03:56.075929   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 30/60
	I1212 21:03:57.077314   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 31/60
	I1212 21:03:58.078704   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 32/60
	I1212 21:03:59.080110   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 33/60
	I1212 21:04:00.081449   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 34/60
	I1212 21:04:01.083411   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 35/60
	I1212 21:04:02.084757   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 36/60
	I1212 21:04:03.086286   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 37/60
	I1212 21:04:04.087651   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 38/60
	I1212 21:04:05.089173   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 39/60
	I1212 21:04:06.091168   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 40/60
	I1212 21:04:07.092506   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 41/60
	I1212 21:04:08.094013   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 42/60
	I1212 21:04:09.095529   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 43/60
	I1212 21:04:10.096992   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 44/60
	I1212 21:04:11.098874   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 45/60
	I1212 21:04:12.100277   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 46/60
	I1212 21:04:13.101715   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 47/60
	I1212 21:04:14.103191   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 48/60
	I1212 21:04:15.104729   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 49/60
	I1212 21:04:16.106241   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 50/60
	I1212 21:04:17.107818   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 51/60
	I1212 21:04:18.109679   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 52/60
	I1212 21:04:19.111037   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 53/60
	I1212 21:04:20.112589   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 54/60
	I1212 21:04:21.114426   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 55/60
	I1212 21:04:22.115996   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 56/60
	I1212 21:04:23.117596   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 57/60
	I1212 21:04:24.119068   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 58/60
	I1212 21:04:25.120378   59937 main.go:141] libmachine: (old-k8s-version-372099) Waiting for machine to stop 59/60
	I1212 21:04:26.121377   59937 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:04:26.121419   59937 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:04:26.123477   59937 out.go:177] 
	W1212 21:04:26.125020   59937 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 21:04:26.125034   59937 out.go:239] * 
	* 
	W1212 21:04:26.127935   59937 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:04:26.129444   59937 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-372099 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099
E1212 21:04:26.764411   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099: exit status 3 (18.511737402s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:44.643655   60599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E1212 21:04:44.643683   60599 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-372099" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.72s)
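
This Stop failure, like the others in this group, follows one pattern: libmachine calls .Stop on the KVM domain, polls it 60 times at roughly one-second intervals, retries once, and then exits with code 82 (GUEST_STOP_TIMEOUT) while the guest still reports "Running". A minimal triage sketch for inspecting the domain directly on the test host, assuming virsh is installed there and the libvirt domain carries the minikube profile name:

    # What does libvirt itself think the guest is doing? (assumes domain name == profile name)
    virsh domstate old-k8s-version-372099
    # Request a clean guest shutdown, roughly what the kvm2 driver's Stop asks for
    virsh shutdown old-k8s-version-372099
    # If the domain still reports "running" after the 60-poll window, force it off
    virsh destroy old-k8s-version-372099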

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-171828 --alsologtostderr -v=3
E1212 21:03:32.839780   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:39.480949   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 21:03:44.733658   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:03:45.801690   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:45.806990   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:45.817329   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:45.838233   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:45.878547   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:45.958959   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:46.119962   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:46.440335   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:47.081401   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:48.361853   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:50.922738   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:53.320959   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:03:56.043555   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:03:56.433191   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-171828 --alsologtostderr -v=3: exit status 82 (2m1.572230732s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-171828"  ...
	* Stopping node "default-k8s-diff-port-171828"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:03:29.479378   60285 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:03:29.479696   60285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:03:29.479709   60285 out.go:309] Setting ErrFile to fd 2...
	I1212 21:03:29.479716   60285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:03:29.479902   60285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:03:29.480154   60285 out.go:303] Setting JSON to false
	I1212 21:03:29.480231   60285 mustload.go:65] Loading cluster: default-k8s-diff-port-171828
	I1212 21:03:29.480569   60285 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:03:29.480643   60285 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:03:29.480799   60285 mustload.go:65] Loading cluster: default-k8s-diff-port-171828
	I1212 21:03:29.480898   60285 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:03:29.480927   60285 stop.go:39] StopHost: default-k8s-diff-port-171828
	I1212 21:03:29.481329   60285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:03:29.481378   60285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:03:29.495744   60285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I1212 21:03:29.496173   60285 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:03:29.496766   60285 main.go:141] libmachine: Using API Version  1
	I1212 21:03:29.496795   60285 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:03:29.497137   60285 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:03:29.499356   60285 out.go:177] * Stopping node "default-k8s-diff-port-171828"  ...
	I1212 21:03:29.500811   60285 main.go:141] libmachine: Stopping "default-k8s-diff-port-171828"...
	I1212 21:03:29.500840   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:03:29.502366   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Stop
	I1212 21:03:29.505350   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 0/60
	I1212 21:03:30.506960   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 1/60
	I1212 21:03:31.508204   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 2/60
	I1212 21:03:32.509708   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 3/60
	I1212 21:03:33.511319   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 4/60
	I1212 21:03:34.513409   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 5/60
	I1212 21:03:35.514831   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 6/60
	I1212 21:03:36.516332   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 7/60
	I1212 21:03:37.517696   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 8/60
	I1212 21:03:38.519005   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 9/60
	I1212 21:03:39.520853   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 10/60
	I1212 21:03:40.522178   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 11/60
	I1212 21:03:41.523485   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 12/60
	I1212 21:03:42.524831   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 13/60
	I1212 21:03:43.526050   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 14/60
	I1212 21:03:44.528030   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 15/60
	I1212 21:03:45.530147   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 16/60
	I1212 21:03:46.531443   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 17/60
	I1212 21:03:47.532943   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 18/60
	I1212 21:03:48.534295   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 19/60
	I1212 21:03:49.536639   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 20/60
	I1212 21:03:50.537927   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 21/60
	I1212 21:03:51.539304   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 22/60
	I1212 21:03:52.540829   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 23/60
	I1212 21:03:53.542100   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 24/60
	I1212 21:03:54.544270   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 25/60
	I1212 21:03:55.545798   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 26/60
	I1212 21:03:56.547424   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 27/60
	I1212 21:03:57.548701   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 28/60
	I1212 21:03:58.550055   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 29/60
	I1212 21:03:59.552291   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 30/60
	I1212 21:04:00.553770   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 31/60
	I1212 21:04:01.555195   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 32/60
	I1212 21:04:02.556644   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 33/60
	I1212 21:04:03.558117   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 34/60
	I1212 21:04:04.560418   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 35/60
	I1212 21:04:05.561836   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 36/60
	I1212 21:04:06.563282   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 37/60
	I1212 21:04:07.565003   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 38/60
	I1212 21:04:08.566574   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 39/60
	I1212 21:04:09.569007   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 40/60
	I1212 21:04:10.570467   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 41/60
	I1212 21:04:11.571892   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 42/60
	I1212 21:04:12.573836   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 43/60
	I1212 21:04:13.575044   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 44/60
	I1212 21:04:14.577087   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 45/60
	I1212 21:04:15.578687   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 46/60
	I1212 21:04:16.580431   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 47/60
	I1212 21:04:17.581862   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 48/60
	I1212 21:04:18.583342   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 49/60
	I1212 21:04:19.585551   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 50/60
	I1212 21:04:20.586925   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 51/60
	I1212 21:04:21.588413   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 52/60
	I1212 21:04:22.589760   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 53/60
	I1212 21:04:23.591172   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 54/60
	I1212 21:04:24.593378   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 55/60
	I1212 21:04:25.594564   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 56/60
	I1212 21:04:26.595894   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 57/60
	I1212 21:04:27.597224   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 58/60
	I1212 21:04:28.598644   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 59/60
	I1212 21:04:29.599170   60285 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:04:29.599219   60285 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:04:29.599236   60285 retry.go:31] will retry after 1.266829458s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:04:30.866672   60285 stop.go:39] StopHost: default-k8s-diff-port-171828
	I1212 21:04:30.867104   60285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:04:30.867150   60285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:04:30.881604   60285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I1212 21:04:30.882022   60285 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:04:30.882485   60285 main.go:141] libmachine: Using API Version  1
	I1212 21:04:30.882507   60285 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:04:30.882846   60285 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:04:30.885137   60285 out.go:177] * Stopping node "default-k8s-diff-port-171828"  ...
	I1212 21:04:30.886703   60285 main.go:141] libmachine: Stopping "default-k8s-diff-port-171828"...
	I1212 21:04:30.886722   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:04:30.888464   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Stop
	I1212 21:04:30.891611   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 0/60
	I1212 21:04:31.893151   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 1/60
	I1212 21:04:32.894489   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 2/60
	I1212 21:04:33.895897   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 3/60
	I1212 21:04:34.897399   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 4/60
	I1212 21:04:35.899440   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 5/60
	I1212 21:04:36.901501   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 6/60
	I1212 21:04:37.902882   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 7/60
	I1212 21:04:38.904124   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 8/60
	I1212 21:04:39.905380   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 9/60
	I1212 21:04:40.907303   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 10/60
	I1212 21:04:41.908774   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 11/60
	I1212 21:04:42.910036   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 12/60
	I1212 21:04:43.911498   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 13/60
	I1212 21:04:44.912756   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 14/60
	I1212 21:04:45.914657   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 15/60
	I1212 21:04:46.915989   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 16/60
	I1212 21:04:47.917788   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 17/60
	I1212 21:04:48.919112   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 18/60
	I1212 21:04:49.920660   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 19/60
	I1212 21:04:50.922236   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 20/60
	I1212 21:04:51.923736   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 21/60
	I1212 21:04:52.925044   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 22/60
	I1212 21:04:53.926450   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 23/60
	I1212 21:04:54.927921   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 24/60
	I1212 21:04:55.929761   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 25/60
	I1212 21:04:56.931286   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 26/60
	I1212 21:04:57.932631   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 27/60
	I1212 21:04:58.934134   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 28/60
	I1212 21:04:59.935472   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 29/60
	I1212 21:05:00.937373   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 30/60
	I1212 21:05:01.938621   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 31/60
	I1212 21:05:02.940278   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 32/60
	I1212 21:05:03.941612   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 33/60
	I1212 21:05:04.943356   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 34/60
	I1212 21:05:05.945142   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 35/60
	I1212 21:05:06.946779   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 36/60
	I1212 21:05:07.948035   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 37/60
	I1212 21:05:08.949509   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 38/60
	I1212 21:05:09.951086   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 39/60
	I1212 21:05:10.953147   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 40/60
	I1212 21:05:11.954552   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 41/60
	I1212 21:05:12.956055   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 42/60
	I1212 21:05:13.957517   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 43/60
	I1212 21:05:14.958739   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 44/60
	I1212 21:05:15.960469   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 45/60
	I1212 21:05:16.962093   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 46/60
	I1212 21:05:17.963939   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 47/60
	I1212 21:05:18.965596   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 48/60
	I1212 21:05:19.967104   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 49/60
	I1212 21:05:20.969071   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 50/60
	I1212 21:05:21.970306   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 51/60
	I1212 21:05:22.971882   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 52/60
	I1212 21:05:23.973250   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 53/60
	I1212 21:05:24.974703   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 54/60
	I1212 21:05:25.976531   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 55/60
	I1212 21:05:26.978390   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 56/60
	I1212 21:05:27.980010   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 57/60
	I1212 21:05:28.981795   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 58/60
	I1212 21:05:29.983310   60285 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for machine to stop 59/60
	I1212 21:05:30.984241   60285 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 21:05:30.984292   60285 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 21:05:30.986474   60285 out.go:177] 
	W1212 21:05:30.988016   60285 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 21:05:30.988028   60285 out.go:239] * 
	* 
	W1212 21:05:30.991161   60285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:05:30.992690   60285 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-171828 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
E1212 21:05:40.620280   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828: exit status 3 (18.672265713s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:05:49.667540   61120 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host
	E1212 21:05:49.667560   61120 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495: exit status 3 (3.199529699s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:18.659610   60470 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	E1212 21:04:18.659633   60470 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-343495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-343495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153953437s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-343495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495: exit status 3 (3.062473231s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:27.875621   60569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	E1212 21:04:27.875645   60569 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-343495" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
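
In each EnableAddonAfterStop failure, the status check exits 3 because the SSH dial to the guest (here 192.168.61.176:22) returns "no route to host", and "addons enable dashboard" then exits 11 with MK_ADDON_ENABLE_PAUSED because its paused check runs crictl over that same unreachable SSH connection. A quick way to confirm this is plain guest unreachability rather than an addon regression, assuming nc is available on the test host (the IP is taken from the "dial tcp ...:22" lines above):

    # Is the guest's SSH port reachable at all?
    nc -vz -w 3 192.168.61.176 22
    # Compare with what minikube reports for the host state
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495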

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188: exit status 3 (3.16816146s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:38.083632   60704 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host
	E1212 21:04:38.083656   60704 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 21:04:39.385012   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:04:42.874514   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:42.879778   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:42.890053   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:42.911193   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:42.951494   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:43.031856   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:43.192729   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:43.513349   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:44.154017   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153704585s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188: exit status 3 (3.062123226s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:47.299646   60773 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host
	E1212 21:04:47.299674   60773 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.163:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-831188" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099
E1212 21:04:45.434518   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099: exit status 3 (3.167940698s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:04:47.811660   60803 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E1212 21:04:47.811695   60803 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-372099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 21:04:47.995150   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:04:53.115464   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-372099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153880394s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-372099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099: exit status 3 (3.065661088s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1212 21:04:57.031572   60906 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E1212 21:04:57.031593   60906 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-372099" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.39s)
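The two commands behind this failure can be replayed by hand. The sketch below uses only the invocations already shown in the log above; it assumes the old-k8s-version-372099 profile from this run still exists on the test host.

# Post-stop status check; the harness expects "Stopped", but this run returned
# "Error" because SSH to 192.168.39.202:22 had no route to host.
out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099

# Follow-up addon enable; it fails with MK_ADDON_ENABLE_PAUSED for the same
# reason (the paused check cannot reach crictl over SSH).
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-372099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4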

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828: exit status 3 (3.167400993s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1212 21:05:52.835585   61198 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host
	E1212 21:05:52.835602   61198 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-171828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 21:05:56.201336   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-171828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15405641s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-171828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
E1212 21:06:01.101255   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828: exit status 3 (3.06205074s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1212 21:06:02.051666   61268 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host
	E1212 21:06:02.051687   61268 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.253:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.35s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:14:39.385076   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:14:42.874436   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831188 -n embed-certs-831188
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:23:17.890500107 +0000 UTC m=+5199.110672677
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
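The condition the harness was polling for can also be checked by hand. This is only a rough kubectl equivalent of the in-process 9m0s wait that timed out above, and it assumes a kubeconfig context named embed-certs-831188 (the context name is not shown in this log).

# Approximate manual version of the wait on the dashboard pod.
kubectl --context embed-certs-831188 -n kubernetes-dashboard \
  wait --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=9m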
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-831188 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-831188 logs -n 25: (1.716310185s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-690675 sudo cat                              | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo find                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo crio                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-690675                                       | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:06:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:06:02.112042   61298 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:06:02.112158   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112166   61298 out.go:309] Setting ErrFile to fd 2...
	I1212 21:06:02.112171   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112352   61298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:06:02.112888   61298 out.go:303] Setting JSON to false
	I1212 21:06:02.113799   61298 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6516,"bootTime":1702408646,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:06:02.113858   61298 start.go:138] virtualization: kvm guest
	I1212 21:06:02.116152   61298 out.go:177] * [default-k8s-diff-port-171828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:06:02.118325   61298 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:06:02.118373   61298 notify.go:220] Checking for updates...
	I1212 21:06:02.120036   61298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:06:02.121697   61298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:06:02.123350   61298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:06:02.124958   61298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:06:02.126355   61298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:06:02.128221   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:06:02.128652   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.128709   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.143368   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I1212 21:06:02.143740   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.144319   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.144342   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.144674   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.144877   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.145143   61298 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:06:02.145473   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.145519   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.160165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1212 21:06:02.160611   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.161098   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.161129   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.161410   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.161605   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.198703   61298 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:06:02.199992   61298 start.go:298] selected driver: kvm2
	I1212 21:06:02.200011   61298 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.200131   61298 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:06:02.200848   61298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.200920   61298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:06:02.215947   61298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:06:02.216333   61298 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:02.216397   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:06:02.216410   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:06:02.216420   61298 start_flags.go:323] config:
	{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-17182
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.216597   61298 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.218773   61298 out.go:177] * Starting control plane node default-k8s-diff-port-171828 in cluster default-k8s-diff-port-171828
	I1212 21:05:59.427580   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:02.220182   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:06:02.220241   61298 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 21:06:02.220256   61298 cache.go:56] Caching tarball of preloaded images
	I1212 21:06:02.220379   61298 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:06:02.220393   61298 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 21:06:02.220514   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:06:02.220739   61298 start.go:365] acquiring machines lock for default-k8s-diff-port-171828: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:06:05.507538   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:08.579605   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:14.659535   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:17.731542   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:23.811575   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:26.883541   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:32.963600   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:36.035521   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:42.115475   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:45.187562   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:51.267528   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:54.339532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:00.419548   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:03.491553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:09.571514   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:12.643531   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:18.723534   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:21.795549   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:27.875554   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:30.947574   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:37.027523   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:40.099490   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:46.179518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:49.251577   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:55.331532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:58.403520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:04.483547   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:07.555546   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:13.635553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:16.707518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:22.787551   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:25.859539   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:31.939511   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:35.011564   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:41.091518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:44.163443   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:50.243526   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:53.315520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:59.395550   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:02.467533   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:05.471384   60833 start.go:369] acquired machines lock for "embed-certs-831188" in 4m18.011296189s
	I1212 21:09:05.471446   60833 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:05.471453   60833 fix.go:54] fixHost starting: 
	I1212 21:09:05.471803   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:05.471837   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:05.486451   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1212 21:09:05.486900   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:05.487381   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:05.487404   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:05.487715   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:05.487879   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:05.488020   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:05.489670   60833 fix.go:102] recreateIfNeeded on embed-certs-831188: state=Stopped err=<nil>
	I1212 21:09:05.489704   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	W1212 21:09:05.489876   60833 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:05.492059   60833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831188" ...
	I1212 21:09:05.493752   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Start
	I1212 21:09:05.493959   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring networks are active...
	I1212 21:09:05.494984   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network default is active
	I1212 21:09:05.495423   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network mk-embed-certs-831188 is active
	I1212 21:09:05.495761   60833 main.go:141] libmachine: (embed-certs-831188) Getting domain xml...
	I1212 21:09:05.496421   60833 main.go:141] libmachine: (embed-certs-831188) Creating domain...
	I1212 21:09:06.732388   60833 main.go:141] libmachine: (embed-certs-831188) Waiting to get IP...
	I1212 21:09:06.733338   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:06.733708   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:06.733785   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:06.733676   61768 retry.go:31] will retry after 284.906493ms: waiting for machine to come up
	I1212 21:09:07.020284   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.020718   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.020745   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.020671   61768 retry.go:31] will retry after 293.274895ms: waiting for machine to come up
	I1212 21:09:07.315313   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.315686   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.315712   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.315641   61768 retry.go:31] will retry after 361.328832ms: waiting for machine to come up
	I1212 21:09:05.469256   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:05.469293   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:09:05.471233   60628 machine.go:91] provisioned docker machine in 4m37.408714984s
	I1212 21:09:05.471294   60628 fix.go:56] fixHost completed within 4m37.431179626s
	I1212 21:09:05.471299   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 4m37.431203273s
	W1212 21:09:05.471318   60628 start.go:694] error starting host: provision: host is not running
	W1212 21:09:05.471416   60628 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 21:09:05.471424   60628 start.go:709] Will try again in 5 seconds ...
	I1212 21:09:07.678255   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.678636   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.678700   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.678599   61768 retry.go:31] will retry after 604.479659ms: waiting for machine to come up
	I1212 21:09:08.284350   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:08.284754   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:08.284779   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:08.284701   61768 retry.go:31] will retry after 731.323448ms: waiting for machine to come up
	I1212 21:09:09.017564   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.018007   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.018040   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.017968   61768 retry.go:31] will retry after 734.083609ms: waiting for machine to come up
	I1212 21:09:09.753947   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.754423   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.754446   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.754362   61768 retry.go:31] will retry after 786.816799ms: waiting for machine to come up
	I1212 21:09:10.542771   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:10.543304   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:10.543341   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:10.543264   61768 retry.go:31] will retry after 1.40646031s: waiting for machine to come up
	I1212 21:09:11.951821   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:11.952180   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:11.952223   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:11.952135   61768 retry.go:31] will retry after 1.693488962s: waiting for machine to come up
	I1212 21:09:10.473087   60628 start.go:365] acquiring machines lock for no-preload-343495: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:09:13.646801   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:13.647256   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:13.647299   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:13.647180   61768 retry.go:31] will retry after 1.856056162s: waiting for machine to come up
	I1212 21:09:15.504815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:15.505228   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:15.505258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:15.505175   61768 retry.go:31] will retry after 2.008264333s: waiting for machine to come up
	I1212 21:09:17.516231   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:17.516653   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:17.516683   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:17.516604   61768 retry.go:31] will retry after 3.239343078s: waiting for machine to come up
	I1212 21:09:20.757258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:20.757696   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:20.757725   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:20.757654   61768 retry.go:31] will retry after 4.315081016s: waiting for machine to come up
	I1212 21:09:26.424166   60948 start.go:369] acquired machines lock for "old-k8s-version-372099" in 4m29.049387398s
	I1212 21:09:26.424241   60948 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:26.424254   60948 fix.go:54] fixHost starting: 
	I1212 21:09:26.424715   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:26.424763   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:26.444634   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I1212 21:09:26.445043   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:26.445520   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:09:26.445538   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:26.445863   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:26.446052   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:26.446192   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:09:26.447776   60948 fix.go:102] recreateIfNeeded on old-k8s-version-372099: state=Stopped err=<nil>
	I1212 21:09:26.447804   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	W1212 21:09:26.448015   60948 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:26.450126   60948 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-372099" ...
	I1212 21:09:26.451553   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Start
	I1212 21:09:26.451708   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring networks are active...
	I1212 21:09:26.452388   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network default is active
	I1212 21:09:26.452655   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network mk-old-k8s-version-372099 is active
	I1212 21:09:26.453124   60948 main.go:141] libmachine: (old-k8s-version-372099) Getting domain xml...
	I1212 21:09:26.453799   60948 main.go:141] libmachine: (old-k8s-version-372099) Creating domain...
	I1212 21:09:25.078112   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078553   60833 main.go:141] libmachine: (embed-certs-831188) Found IP for machine: 192.168.50.163
	I1212 21:09:25.078585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has current primary IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078596   60833 main.go:141] libmachine: (embed-certs-831188) Reserving static IP address...
	I1212 21:09:25.078997   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.079030   60833 main.go:141] libmachine: (embed-certs-831188) Reserved static IP address: 192.168.50.163
	I1212 21:09:25.079052   60833 main.go:141] libmachine: (embed-certs-831188) DBG | skip adding static IP to network mk-embed-certs-831188 - found existing host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"}
	I1212 21:09:25.079071   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Getting to WaitForSSH function...
	I1212 21:09:25.079085   60833 main.go:141] libmachine: (embed-certs-831188) Waiting for SSH to be available...
	I1212 21:09:25.080901   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081194   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.081242   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081366   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH client type: external
	I1212 21:09:25.081388   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa (-rw-------)
	I1212 21:09:25.081416   60833 main.go:141] libmachine: (embed-certs-831188) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:25.081426   60833 main.go:141] libmachine: (embed-certs-831188) DBG | About to run SSH command:
	I1212 21:09:25.081438   60833 main.go:141] libmachine: (embed-certs-831188) DBG | exit 0
	I1212 21:09:25.171277   60833 main.go:141] libmachine: (embed-certs-831188) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:25.171663   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetConfigRaw
	I1212 21:09:25.172345   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.174944   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175302   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.175333   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175553   60833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/config.json ...
	I1212 21:09:25.175828   60833 machine.go:88] provisioning docker machine ...
	I1212 21:09:25.175855   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:25.176065   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176212   60833 buildroot.go:166] provisioning hostname "embed-certs-831188"
	I1212 21:09:25.176233   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176371   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.178556   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178823   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.178850   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178957   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.179142   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179295   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179436   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.179558   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.179895   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.179910   60833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831188 && echo "embed-certs-831188" | sudo tee /etc/hostname
	I1212 21:09:25.312418   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831188
	
	I1212 21:09:25.312457   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.315156   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315529   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.315570   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315707   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.315895   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316053   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316211   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.316378   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.316840   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.316869   60833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831188' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831188/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831188' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:25.448302   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:25.448332   60833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:25.448353   60833 buildroot.go:174] setting up certificates
	I1212 21:09:25.448362   60833 provision.go:83] configureAuth start
	I1212 21:09:25.448369   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.448691   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.451262   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.451639   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451807   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.454144   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454434   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.454460   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454596   60833 provision.go:138] copyHostCerts
	I1212 21:09:25.454665   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:25.454689   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:25.454775   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:25.454928   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:25.454940   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:25.454984   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:25.455062   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:25.455073   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:25.455106   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:25.455171   60833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831188 san=[192.168.50.163 192.168.50.163 localhost 127.0.0.1 minikube embed-certs-831188]
	I1212 21:09:25.678855   60833 provision.go:172] copyRemoteCerts
	I1212 21:09:25.678942   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:25.678975   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.681866   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682221   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.682249   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682399   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.682590   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.682730   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.682856   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:25.773454   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:25.796334   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 21:09:25.818680   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:09:25.840234   60833 provision.go:86] duration metric: configureAuth took 391.845214ms
	I1212 21:09:25.840268   60833 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:25.840497   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:25.840643   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.842988   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843431   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.843482   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843586   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.843772   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.843946   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.844066   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.844227   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.844542   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.844563   60833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:26.167363   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:26.167388   60833 machine.go:91] provisioned docker machine in 991.541719ms
	I1212 21:09:26.167398   60833 start.go:300] post-start starting for "embed-certs-831188" (driver="kvm2")
	I1212 21:09:26.167408   60833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:26.167444   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.167739   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:26.167763   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.170188   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170569   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.170611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170712   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.170880   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.171049   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.171194   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.261249   60833 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:26.265429   60833 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:26.265451   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:26.265522   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:26.265602   60833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:26.265695   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:26.274054   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:26.297890   60833 start.go:303] post-start completed in 130.478946ms
	I1212 21:09:26.297915   60833 fix.go:56] fixHost completed within 20.826462284s
	I1212 21:09:26.297934   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.300585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.300934   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.300975   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.301144   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.301359   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301529   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301665   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.301797   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:26.302153   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:26.302164   60833 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 21:09:26.423978   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415366.370228005
	
	I1212 21:09:26.424008   60833 fix.go:206] guest clock: 1702415366.370228005
	I1212 21:09:26.424019   60833 fix.go:219] Guest: 2023-12-12 21:09:26.370228005 +0000 UTC Remote: 2023-12-12 21:09:26.297918475 +0000 UTC m=+278.991313322 (delta=72.30953ms)
	I1212 21:09:26.424052   60833 fix.go:190] guest clock delta is within tolerance: 72.30953ms
	I1212 21:09:26.424061   60833 start.go:83] releasing machines lock for "embed-certs-831188", held for 20.952636536s
	I1212 21:09:26.424090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.424347   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:26.427068   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427479   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.427519   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427592   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428173   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428344   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428414   60833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:26.428470   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.428492   60833 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:26.428508   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.430943   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431251   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431371   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431393   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431548   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431631   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431654   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431776   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.431844   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431998   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.432040   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432285   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.432490   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.548980   60833 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:26.555211   60833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:26.707171   60833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:26.714564   60833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:26.714658   60833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:26.730858   60833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:26.730890   60833 start.go:475] detecting cgroup driver to use...
	I1212 21:09:26.730963   60833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:26.751316   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:26.766700   60833 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:26.766767   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:26.783157   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:26.799559   60833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:26.908659   60833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:27.029185   60833 docker.go:219] disabling docker service ...
	I1212 21:09:27.029245   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:27.042969   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:27.055477   60833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:27.174297   60833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:27.285338   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:27.299676   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:27.317832   60833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:09:27.317900   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.329270   60833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:27.329346   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.341201   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.353243   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.365796   60833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:27.377700   60833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:27.388796   60833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:27.388858   60833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:27.401983   60833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:27.411527   60833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:27.523326   60833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:27.702370   60833 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:27.702435   60833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:27.707537   60833 start.go:543] Will wait 60s for crictl version
	I1212 21:09:27.707619   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:09:27.711502   60833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:27.750808   60833 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:27.750912   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.799419   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.848900   60833 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 21:09:27.722142   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting to get IP...
	I1212 21:09:27.723300   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.723736   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.723806   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.723702   61894 retry.go:31] will retry after 267.755874ms: waiting for machine to come up
	I1212 21:09:27.993406   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.993917   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.993947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.993865   61894 retry.go:31] will retry after 314.872831ms: waiting for machine to come up
	I1212 21:09:28.310446   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.311022   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.311051   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.310971   61894 retry.go:31] will retry after 435.368111ms: waiting for machine to come up
	I1212 21:09:28.747774   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.748267   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.748299   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.748238   61894 retry.go:31] will retry after 521.305154ms: waiting for machine to come up
	I1212 21:09:29.270989   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.271519   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.271553   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.271446   61894 retry.go:31] will retry after 482.42376ms: waiting for machine to come up
	I1212 21:09:29.755222   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.755724   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.755755   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.755671   61894 retry.go:31] will retry after 676.918794ms: waiting for machine to come up
	I1212 21:09:30.434488   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:30.435072   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:30.435103   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:30.435025   61894 retry.go:31] will retry after 876.618903ms: waiting for machine to come up
	I1212 21:09:31.313270   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:31.313826   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:31.313857   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:31.313775   61894 retry.go:31] will retry after 1.03353638s: waiting for machine to come up
	I1212 21:09:27.850614   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:27.853633   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854033   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:27.854069   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854243   60833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:27.858626   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:27.871999   60833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:09:27.872058   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:27.920758   60833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:09:27.920832   60833 ssh_runner.go:195] Run: which lz4
	I1212 21:09:27.924857   60833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:09:27.929186   60833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:27.929220   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:09:29.834194   60833 crio.go:444] Took 1.909381 seconds to copy over tarball
	I1212 21:09:29.834285   60833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:32.348562   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:32.349019   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:32.349041   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:32.348978   61894 retry.go:31] will retry after 1.80085882s: waiting for machine to come up
	I1212 21:09:34.151943   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:34.152375   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:34.152416   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:34.152343   61894 retry.go:31] will retry after 2.08304575s: waiting for machine to come up
	I1212 21:09:36.238682   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:36.239115   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:36.239149   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:36.239074   61894 retry.go:31] will retry after 2.109809124s: waiting for machine to come up
	I1212 21:09:33.005355   60833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.171034001s)
	I1212 21:09:33.005386   60833 crio.go:451] Took 3.171167 seconds to extract the tarball
	I1212 21:09:33.005398   60833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:33.046773   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:33.101606   60833 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:09:33.101627   60833 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:09:33.101689   60833 ssh_runner.go:195] Run: crio config
	I1212 21:09:33.162553   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:33.162584   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:33.162608   60833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:33.162637   60833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831188 NodeName:embed-certs-831188 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:09:33.162806   60833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831188"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:33.162923   60833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-831188 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:33.162978   60833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:09:33.171937   60833 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:33.172013   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:33.180480   60833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 21:09:33.197675   60833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:33.214560   60833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 21:09:33.234926   60833 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:33.238913   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:33.255261   60833 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188 for IP: 192.168.50.163
	I1212 21:09:33.255320   60833 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:33.255462   60833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:33.255496   60833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:33.255561   60833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/client.key
	I1212 21:09:33.255641   60833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key.6a576ed8
	I1212 21:09:33.255686   60833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key
	I1212 21:09:33.255781   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:33.255807   60833 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:33.255814   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:33.255835   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:33.255864   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:33.255885   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:33.255931   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:33.256505   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:33.282336   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:33.307179   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:33.332468   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:09:33.357444   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:33.383372   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:33.409070   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:33.438164   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:33.467676   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:33.496645   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:33.523126   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:33.548366   60833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:33.567745   60833 ssh_runner.go:195] Run: openssl version
	I1212 21:09:33.573716   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:33.584221   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589689   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589767   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.595880   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:33.609574   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:33.623129   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629541   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629615   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.635862   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:33.646421   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:33.656686   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661397   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661473   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.667092   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:33.677905   60833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:33.682795   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:33.689346   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:33.695822   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:33.702368   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:33.708500   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:33.714793   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:09:33.721121   60833 kubeadm.go:404] StartCluster: {Name:embed-certs-831188 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:33.721252   60833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:33.721319   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:33.759428   60833 cri.go:89] found id: ""
	I1212 21:09:33.759502   60833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:33.769592   60833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:33.769617   60833 kubeadm.go:636] restartCluster start
	I1212 21:09:33.769712   60833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:33.779313   60833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.780838   60833 kubeconfig.go:92] found "embed-certs-831188" server: "https://192.168.50.163:8443"
	I1212 21:09:33.784096   60833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:33.793192   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.793314   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.805112   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.805139   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.805196   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.816975   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.317757   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.317858   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.329702   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.817167   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.817266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.828633   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.317136   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.317230   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.328803   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.818032   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.818121   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.829428   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.318141   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.318253   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.330749   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.817284   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.817367   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.828787   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:37.317183   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.317266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.334557   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.350131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:38.350522   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:38.350546   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:38.350484   61894 retry.go:31] will retry after 2.423656351s: waiting for machine to come up
	I1212 21:09:40.777036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:40.777455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:40.777489   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:40.777399   61894 retry.go:31] will retry after 3.275180742s: waiting for machine to come up
	I1212 21:09:37.817090   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.817219   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.833813   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.317328   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.317409   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.334684   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.817255   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.817353   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.831011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.317555   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.317648   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.330189   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.817759   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.817866   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.830611   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.317127   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.317198   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.329508   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.817580   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.817677   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.829289   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.317853   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.317928   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.331394   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.818013   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.818098   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.829011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:42.317526   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.317610   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.329211   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:44.056058   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:44.056558   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:44.056587   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:44.056517   61894 retry.go:31] will retry after 4.729711581s: waiting for machine to come up
	I1212 21:09:42.818081   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.818166   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.829930   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.317420   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:43.317526   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:43.328536   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.794084   60833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:09:43.794118   60833 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:09:43.794129   60833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:09:43.794192   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:43.842360   60833 cri.go:89] found id: ""
	I1212 21:09:43.842431   60833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:09:43.859189   60833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:09:43.869065   60833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:09:43.869135   60833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878614   60833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878644   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.011533   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.544591   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.757944   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.850440   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.942874   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:09:44.942967   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:44.954886   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.466556   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.966545   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.465991   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.966021   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.987348   60833 api_server.go:72] duration metric: took 2.04447632s to wait for apiserver process to appear ...
	I1212 21:09:46.987374   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:09:46.987388   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.987890   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:46.987926   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.988389   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
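	Note on the wait loop recorded above: the api_server.go lines poll the control plane's /healthz endpoint and treat connection-refused, 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still pending) responses as "not ready yet", retrying until a 200 arrives or the deadline expires. The sketch below shows that polling pattern only; the function name, intervals and timeout are illustrative assumptions, not minikube's actual implementation.

	// waitForHealthz polls an apiserver /healthz URL until it returns 200 OK or
	// the timeout expires. Illustrative sketch; names and intervals are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test clusters use self-signed certificates, so skip verification here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					return nil // healthz returned 200: control plane is ready
				}
				// 403 and 500 mean the apiserver is listening but not yet healthy; keep polling.
			}
			// A connection-refused error means the process is not listening yet; retry.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.163:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}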
	I1212 21:09:50.008527   61298 start.go:369] acquired machines lock for "default-k8s-diff-port-171828" in 3m47.787737833s
	I1212 21:09:50.008595   61298 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:50.008607   61298 fix.go:54] fixHost starting: 
	I1212 21:09:50.008999   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:50.009035   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:50.025692   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I1212 21:09:50.026047   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:50.026541   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:09:50.026563   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:50.026945   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:50.027160   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:09:50.027344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:09:50.029005   61298 fix.go:102] recreateIfNeeded on default-k8s-diff-port-171828: state=Stopped err=<nil>
	I1212 21:09:50.029031   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	W1212 21:09:50.029193   61298 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:50.031805   61298 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-171828" ...
	I1212 21:09:48.789770   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790158   60948 main.go:141] libmachine: (old-k8s-version-372099) Found IP for machine: 192.168.39.202
	I1212 21:09:48.790172   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserving static IP address...
	I1212 21:09:48.790195   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790655   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.790683   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserved static IP address: 192.168.39.202
	I1212 21:09:48.790701   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | skip adding static IP to network mk-old-k8s-version-372099 - found existing host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"}
	I1212 21:09:48.790719   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Getting to WaitForSSH function...
	I1212 21:09:48.790736   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting for SSH to be available...
	I1212 21:09:48.793069   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793392   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.793418   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793542   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH client type: external
	I1212 21:09:48.793582   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa (-rw-------)
	I1212 21:09:48.793610   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:48.793620   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | About to run SSH command:
	I1212 21:09:48.793629   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | exit 0
	I1212 21:09:48.883487   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:48.883885   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetConfigRaw
	I1212 21:09:48.884519   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:48.887128   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.887485   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887734   60948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/config.json ...
	I1212 21:09:48.887918   60948 machine.go:88] provisioning docker machine ...
	I1212 21:09:48.887936   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:48.888097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888225   60948 buildroot.go:166] provisioning hostname "old-k8s-version-372099"
	I1212 21:09:48.888238   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888378   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:48.890462   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890820   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.890847   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890982   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:48.891139   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891289   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:48.891597   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:48.891940   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:48.891955   60948 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-372099 && echo "old-k8s-version-372099" | sudo tee /etc/hostname
	I1212 21:09:49.012923   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-372099
	
	I1212 21:09:49.012954   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.015698   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016076   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.016117   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016245   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.016437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016583   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016710   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.016859   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.017308   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.017338   60948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-372099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-372099/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-372099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:49.144804   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:49.144842   60948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:49.144875   60948 buildroot.go:174] setting up certificates
	I1212 21:09:49.144885   60948 provision.go:83] configureAuth start
	I1212 21:09:49.144896   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:49.145181   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:49.147947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148294   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.148340   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148475   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.151218   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.151697   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.151760   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.152022   60948 provision.go:138] copyHostCerts
	I1212 21:09:49.152083   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:49.152102   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:49.152172   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:49.152299   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:49.152307   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:49.152335   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:49.152402   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:49.152407   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:49.152428   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:49.152485   60948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-372099 san=[192.168.39.202 192.168.39.202 localhost 127.0.0.1 minikube old-k8s-version-372099]
	I1212 21:09:49.298406   60948 provision.go:172] copyRemoteCerts
	I1212 21:09:49.298478   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:49.298508   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.301384   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301696   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.301729   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301948   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.302156   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.302320   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.302442   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.385046   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:09:49.409667   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:49.434002   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:09:49.458872   60948 provision.go:86] duration metric: configureAuth took 313.97378ms
	I1212 21:09:49.458907   60948 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:49.459075   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:09:49.459143   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.461794   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.462183   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462373   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.462574   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462730   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462857   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.463042   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.463594   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.463641   60948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:49.767652   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:49.767745   60948 machine.go:91] provisioned docker machine in 879.803204ms
	I1212 21:09:49.767772   60948 start.go:300] post-start starting for "old-k8s-version-372099" (driver="kvm2")
	I1212 21:09:49.767785   60948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:49.767812   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:49.768162   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:49.768191   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.770970   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771351   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.771388   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771595   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.771805   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.772009   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.772155   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.857053   60948 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:49.861510   60948 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:49.861535   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:49.861600   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:49.861672   60948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:49.861781   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:49.869967   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:49.892746   60948 start.go:303] post-start completed in 124.959403ms
	I1212 21:09:49.892768   60948 fix.go:56] fixHost completed within 23.468514721s
	I1212 21:09:49.892790   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.895273   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895618   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.895653   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895776   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.895951   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896269   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.896433   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.896887   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.896904   60948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:50.008384   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415389.953345991
	
	I1212 21:09:50.008407   60948 fix.go:206] guest clock: 1702415389.953345991
	I1212 21:09:50.008415   60948 fix.go:219] Guest: 2023-12-12 21:09:49.953345991 +0000 UTC Remote: 2023-12-12 21:09:49.89277138 +0000 UTC m=+292.853960893 (delta=60.574611ms)
	I1212 21:09:50.008441   60948 fix.go:190] guest clock delta is within tolerance: 60.574611ms
	I1212 21:09:50.008445   60948 start.go:83] releasing machines lock for "old-k8s-version-372099", held for 23.584233709s
	I1212 21:09:50.008469   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.008757   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:50.011577   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.011930   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.011958   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.012109   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012750   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012964   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.013059   60948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:50.013102   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.013195   60948 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:50.013222   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.016031   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016304   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016525   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016566   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016720   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.016815   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016855   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016883   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017008   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.017080   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017186   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017256   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.017357   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017520   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.125100   60948 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:50.132264   60948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:50.278965   60948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:50.286230   60948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:50.286308   60948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:50.301165   60948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:50.301192   60948 start.go:475] detecting cgroup driver to use...
	I1212 21:09:50.301256   60948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:50.318715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:50.331943   60948 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:50.332013   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:50.348872   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:50.366970   60948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:50.492936   60948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:50.620103   60948 docker.go:219] disabling docker service ...
	I1212 21:09:50.620185   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:50.632962   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:50.644797   60948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:50.759039   60948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:50.884352   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:50.896549   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:50.919987   60948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 21:09:50.920056   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.932147   60948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:50.932224   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.941195   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.951010   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.962752   60948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:50.975125   60948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:50.984906   60948 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:50.984971   60948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:50.999594   60948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:51.010344   60948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:51.114607   60948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:51.318020   60948 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:51.318108   60948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:51.325048   60948 start.go:543] Will wait 60s for crictl version
	I1212 21:09:51.325134   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:51.329905   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:51.377974   60948 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:51.378075   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.444024   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.512531   60948 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 21:09:51.514171   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:51.517083   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517636   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:51.517667   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517886   60948 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:51.522137   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:51.538124   60948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 21:09:51.538191   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:51.594603   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:51.594688   60948 ssh_runner.go:195] Run: which lz4
	I1212 21:09:51.599732   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:51.604811   60948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:51.604844   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 21:09:50.033553   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Start
	I1212 21:09:50.033768   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring networks are active...
	I1212 21:09:50.034638   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network default is active
	I1212 21:09:50.035192   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network mk-default-k8s-diff-port-171828 is active
	I1212 21:09:50.035630   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Getting domain xml...
	I1212 21:09:50.036380   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Creating domain...
	I1212 21:09:51.530274   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting to get IP...
	I1212 21:09:51.531329   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531766   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.531744   62039 retry.go:31] will retry after 271.90604ms: waiting for machine to come up
	I1212 21:09:51.805469   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806028   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806062   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.805967   62039 retry.go:31] will retry after 338.221769ms: waiting for machine to come up
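	Note on the "will retry after ..." lines above: while a restarted VM has no DHCP lease yet, the driver sleeps for a growing interval between lease lookups before checking again. The sketch below mirrors only that retry-with-backoff shape; the lease-lookup stub and all names are hypothetical and are not minikube's driver code.

	// Illustrative retry-with-growing-backoff loop for waiting on a machine IP.
	// All identifiers here are made up for illustration.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var attempts int

	// lookupLeaseIP stands in for querying the hypervisor's DHCP leases for the
	// domain's MAC address; it fails until a lease finally appears.
	func lookupLeaseIP(mac string) (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.202", nil
	}

	func waitForIP(mac string, deadline time.Duration) (string, error) {
		backoff := 250 * time.Millisecond
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			backoff *= 2 // grow the interval between polls, as the logged retry delays do
		}
		return "", fmt.Errorf("machine with MAC %s did not come up within %s", mac, deadline)
	}

	func main() {
		ip, err := waitForIP("52:54:00:65:ee:fd", 30*time.Second)
		fmt.Println(ip, err)
	}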
	I1212 21:09:47.488610   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.543731   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.543786   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.543807   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.654915   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.654949   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.989408   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.996278   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:51.996337   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.488734   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.496289   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:52.496327   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.989065   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.997013   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:09:53.012736   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:09:53.012777   60833 api_server.go:131] duration metric: took 6.025395735s to wait for apiserver health ...
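Note on the repeated 500 responses above: minikube keeps polling the apiserver /healthz endpoint until the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok. A minimal Go sketch of that kind of poll loop follows; the insecure TLS client and the fixed 500ms interval are illustrative assumptions, not minikube's actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes. TLS verification is skipped purely for this sketch;
// a real client would trust the cluster CA instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz finally answered "ok"
			}
			// A 500 with "[-]poststarthook/... failed" means keep waiting.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.50.163:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}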
	I1212 21:09:53.012789   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:53.012806   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:53.014820   60833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:09:53.016797   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:09:53.047434   60833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
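The 1-k8s.conflist written above is minikube's bridge CNI configuration; the log only records its size (457 bytes), so the sketch below is a hedged illustration of what a comparable bridge conflist could look like, generated from Go. The plugin field values are assumptions, not the exact file minikube scp'd.

package main

import (
	"encoding/json"
	"fmt"
)

// Emit a rough bridge CNI config of the kind placed in /etc/cni/net.d/1-k8s.conflist.
func main() {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR assumed from the kubeadm config later in this log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}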
	I1212 21:09:53.095811   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:09:53.115354   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:09:53.115441   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:09:53.115465   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:09:53.115504   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:09:53.115532   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:09:53.115551   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:09:53.115582   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:09:53.115607   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:09:53.115633   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:09:53.115643   60833 system_pods.go:74] duration metric: took 19.808922ms to wait for pod list to return data ...
	I1212 21:09:53.115655   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:09:53.127006   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:09:53.127044   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:09:53.127058   60833 node_conditions.go:105] duration metric: took 11.39604ms to run NodePressure ...
	I1212 21:09:53.127079   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:53.597509   60833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603447   60833 kubeadm.go:787] kubelet initialised
	I1212 21:09:53.603476   60833 kubeadm.go:788] duration metric: took 5.932359ms waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603486   60833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:09:53.616570   60833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.623514   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623547   60833 pod_ready.go:81] duration metric: took 6.940441ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.623560   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623570   60833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.631395   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631426   60833 pod_ready.go:81] duration metric: took 7.844548ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.631438   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631453   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.649647   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649681   60833 pod_ready.go:81] duration metric: took 18.215042ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.649693   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649702   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.662239   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662271   60833 pod_ready.go:81] duration metric: took 12.552977ms waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.662285   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662298   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.005841   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005879   60833 pod_ready.go:81] duration metric: took 343.569867ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.005892   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005908   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.403249   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403280   60833 pod_ready.go:81] duration metric: took 397.363687ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.403291   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403297   60833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.802330   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802367   60833 pod_ready.go:81] duration metric: took 399.057426ms waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.802380   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802390   60833 pod_ready.go:38] duration metric: took 1.198894195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:09:54.802413   60833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:09:54.822125   60833 ops.go:34] apiserver oom_adj: -16
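The oom_adj read above confirms the restarted apiserver is deprioritized for the OOM killer (-16). A small Go sketch of the same check is below; the pgrep-based pid lookup mirrors the "cat /proc/$(pgrep kube-apiserver)/oom_adj" one-liner in the log and is only an illustration of how to reproduce it by hand, not minikube's own code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj returns the oom_adj value of the running kube-apiserver.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil // the log above shows "-16"
}

func main() {
	v, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", v)
}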
	I1212 21:09:54.822154   60833 kubeadm.go:640] restartCluster took 21.052529291s
	I1212 21:09:54.822173   60833 kubeadm.go:406] StartCluster complete in 21.101061651s
	I1212 21:09:54.822194   60833 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.822273   60833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:09:54.825185   60833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.825490   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:09:54.825622   60833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:09:54.825714   60833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831188"
	I1212 21:09:54.825735   60833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-831188"
	W1212 21:09:54.825756   60833 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:09:54.825806   60833 addons.go:69] Setting metrics-server=true in profile "embed-certs-831188"
	I1212 21:09:54.825837   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.825849   60833 addons.go:231] Setting addon metrics-server=true in "embed-certs-831188"
	W1212 21:09:54.825863   60833 addons.go:240] addon metrics-server should already be in state true
	I1212 21:09:54.825969   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.826276   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826309   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826522   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826588   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826731   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:54.826767   60833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831188"
	I1212 21:09:54.826847   60833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831188"
	I1212 21:09:54.827349   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.827409   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.834506   60833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-831188" context rescaled to 1 replicas
	I1212 21:09:54.834614   60833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:09:54.837122   60833 out.go:177] * Verifying Kubernetes components...
	I1212 21:09:54.839094   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:09:54.846081   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I1212 21:09:54.846737   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847078   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I1212 21:09:54.847367   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.847387   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.847518   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847775   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848031   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.848053   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.848061   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.848355   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848912   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.848955   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.849635   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1212 21:09:54.849986   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.852255   60833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-831188"
	W1212 21:09:54.852279   60833 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:09:54.852306   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.852727   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.852758   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.853259   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.853289   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.853643   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.854187   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.854223   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.870249   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I1212 21:09:54.870805   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.871406   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.871430   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.871920   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.872090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.873692   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.876011   60833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:54.874681   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1212 21:09:54.877102   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1212 21:09:54.877666   60833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:54.877691   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:09:54.877710   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.877993   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878108   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878602   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878622   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.878738   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878754   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.879004   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.879362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.879426   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.880445   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.880486   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.881642   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.883715   60833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:09:54.885165   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:09:54.885184   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:09:54.885199   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.883021   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.883884   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.885257   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.885295   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.885442   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.885598   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.885727   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.893093   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893096   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.893152   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.893190   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.893534   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.893676   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.902833   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1212 21:09:54.903320   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.903867   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.903888   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.904337   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.904535   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.906183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.906443   60833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:54.906463   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:09:54.906484   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.909330   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.909914   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.909954   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.910136   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.910328   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.910492   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.910639   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:55.020642   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:55.123475   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:55.141398   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:09:55.141429   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:09:55.200799   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:09:55.200833   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:09:55.275142   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:55.275172   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:09:55.308985   60833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831188" to be "Ready" ...
	I1212 21:09:55.309133   60833 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:09:55.341251   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:56.829715   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706199185s)
	I1212 21:09:56.829768   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829780   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.829784   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.809111646s)
	I1212 21:09:56.829860   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829870   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830143   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.830166   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.830178   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.830188   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830267   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.831959   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832013   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.832048   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.831765   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831788   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831794   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.832139   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832236   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.833156   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.833196   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.843517   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.843542   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.843815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.843870   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.843880   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.023745   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.682445607s)
	I1212 21:09:57.023801   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.023815   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024252   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024263   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:57.024276   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024287   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.024303   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024676   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024691   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024706   60833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-831188"
	I1212 21:09:53.564404   60948 crio.go:444] Took 1.964711 seconds to copy over tarball
	I1212 21:09:53.564488   60948 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:57.052627   60948 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.488106402s)
	I1212 21:09:57.052657   60948 crio.go:451] Took 3.488218 seconds to extract the tarball
	I1212 21:09:57.052669   60948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:52.145724   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146453   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146484   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.146352   62039 retry.go:31] will retry after 482.98499ms: waiting for machine to come up
	I1212 21:09:52.630862   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631317   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631343   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.631232   62039 retry.go:31] will retry after 480.323704ms: waiting for machine to come up
	I1212 21:09:53.113661   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.114249   62039 retry.go:31] will retry after 649.543956ms: waiting for machine to come up
	I1212 21:09:53.765102   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765613   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765643   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.765558   62039 retry.go:31] will retry after 824.137815ms: waiting for machine to come up
	I1212 21:09:54.591782   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592356   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592391   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:54.592273   62039 retry.go:31] will retry after 874.563899ms: waiting for machine to come up
	I1212 21:09:55.468934   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469429   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469459   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:55.469393   62039 retry.go:31] will retry after 1.224276076s: waiting for machine to come up
	I1212 21:09:56.695111   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695604   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695637   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:56.695560   62039 retry.go:31] will retry after 1.207984075s: waiting for machine to come up
	I1212 21:09:57.157310   60833 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:09:57.322702   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:57.093318   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:57.723104   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:57.723132   60948 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:09:57.723259   60948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.723342   60948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.723442   60948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 21:09:57.723302   60948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724835   60948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 21:09:57.724864   60948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.724861   60948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.724836   60948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724853   60948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.724842   60948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.724847   60948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.724893   60948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.918047   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.920893   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.927072   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.928080   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.931259   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 21:09:57.932017   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.939580   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.990594   60948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 21:09:57.990667   60948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.990724   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.059759   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:58.095401   60948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 21:09:58.095451   60948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.095504   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138192   60948 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 21:09:58.138287   60948 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.138333   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138491   60948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 21:09:58.138532   60948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.138594   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145060   60948 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 21:09:58.145116   60948 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 21:09:58.145146   60948 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 21:09:58.145177   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145185   60948 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.145225   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145073   60948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 21:09:58.145250   60948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.145271   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145322   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:58.268621   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.268721   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.268774   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.268826   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 21:09:58.268863   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.268895   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.268956   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 21:09:58.408748   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 21:09:58.418795   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 21:09:58.418843   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 21:09:58.420451   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 21:09:58.420516   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 21:09:58.420577   60948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.420585   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 21:09:58.425621   60948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 21:09:58.425639   60948 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.425684   60948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 21:09:59.172682   60948 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 21:09:59.172736   60948 cache_images.go:92] LoadImages completed in 1.449590507s
	W1212 21:09:59.172819   60948 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1212 21:09:59.172900   60948 ssh_runner.go:195] Run: crio config
	I1212 21:09:59.238502   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:09:59.238522   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:59.238539   60948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:59.238560   60948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-372099 NodeName:old-k8s-version-372099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 21:09:59.238733   60948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-372099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-372099
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.202:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:59.238886   60948 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-372099 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:59.238953   60948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 21:09:59.249183   60948 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:59.249271   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:59.263171   60948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 21:09:59.281172   60948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:59.302622   60948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 21:09:59.323131   60948 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:59.327344   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:59.342182   60948 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099 for IP: 192.168.39.202
	I1212 21:09:59.342216   60948 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:59.342412   60948 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:59.342465   60948 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:59.342554   60948 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/client.key
	I1212 21:09:59.342659   60948 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key.9e66e972
	I1212 21:09:59.342723   60948 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key
	I1212 21:09:59.342854   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:59.342891   60948 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:59.342908   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:59.342947   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:59.342984   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:59.343024   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:59.343081   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:59.343948   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:59.375250   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:59.404892   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:59.434762   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:09:59.465696   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:59.496528   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:59.521739   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:59.545606   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:59.574153   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:59.599089   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:59.625217   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:59.654715   60948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:59.674946   60948 ssh_runner.go:195] Run: openssl version
	I1212 21:09:59.683295   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:59.697159   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702671   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702745   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.710931   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:59.723204   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:59.735713   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741621   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741715   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.748041   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:59.760217   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:59.772701   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778501   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778589   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.787066   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
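The repeated ls / openssl / ln sequence above implements OpenSSL's hashed-directory convention: each CA file copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject-name hash (3ec20f2e.0, b5213941.0 and 51391683.0 here), which is how TLS clients that scan the hashed directory find it. A minimal sketch, assuming the same file names as in the log:

    # compute the subject-name hash and create the <hash>.0 link OpenSSL looks up
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem)
    sudo ln -fs /usr/share/ca-certificates/16456.pem "/etc/ssl/certs/${hash}.0"   # -> 51391683.0 in this run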
	I1212 21:09:59.803355   60948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:59.809920   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:59.819093   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:59.827918   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:59.836228   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:59.845437   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:59.852647   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
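Each -checkend 86400 call above succeeds only if the certificate is still valid for at least another 86400 seconds (24 hours); a non-zero exit would flag the cert for regeneration before the cluster is restarted. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
      echo "peer.crt is valid for at least another 24h"
    else
      echo "peer.crt expires within 24h (or could not be read)"
    fi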
	I1212 21:09:59.861170   60948 kubeadm.go:404] StartCluster: {Name:old-k8s-version-372099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:59.861285   60948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:59.861358   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:59.906807   60948 cri.go:89] found id: ""
	I1212 21:09:59.906885   60948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:59.919539   60948 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:59.919579   60948 kubeadm.go:636] restartCluster start
	I1212 21:09:59.919637   60948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:59.930547   60948 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.931845   60948 kubeconfig.go:92] found "old-k8s-version-372099" server: "https://192.168.39.202:8443"
	I1212 21:09:59.934471   60948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:59.945701   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.945780   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.959415   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.959438   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.959496   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.975677   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.476388   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.476469   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.493781   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.976367   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.976475   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.993084   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.476277   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.476362   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.490076   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.976393   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.976505   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.990771   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:57.905327   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905730   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:57.905649   62039 retry.go:31] will retry after 1.427858275s: waiting for machine to come up
	I1212 21:09:59.335284   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335735   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:59.335630   62039 retry.go:31] will retry after 1.773169552s: waiting for machine to come up
	I1212 21:10:01.110044   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110533   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:01.110468   62039 retry.go:31] will retry after 2.199207847s: waiting for machine to come up
	I1212 21:09:57.672094   60833 addons.go:502] enable addons completed in 2.846462968s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:09:59.822907   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:01.824673   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:02.325980   60833 node_ready.go:49] node "embed-certs-831188" has status "Ready":"True"
	I1212 21:10:02.326008   60833 node_ready.go:38] duration metric: took 7.016985612s waiting for node "embed-certs-831188" to be "Ready" ...
	I1212 21:10:02.326021   60833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:02.339547   60833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345609   60833 pod_ready.go:92] pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.345638   60833 pod_ready.go:81] duration metric: took 6.052243ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345652   60833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
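Interleaved with the apiserver polling of the old-k8s-version run, the embed-certs-831188 run (pid 60833) above has reached its readiness phase: it waits for the node's Ready condition and then walks the system-critical pods one by one. A roughly equivalent check with kubectl, assuming the kubeconfig context carries the profile name:

    kubectl --context embed-certs-831188 wait --for=condition=Ready \
      node/embed-certs-831188 --timeout=6m
    kubectl --context embed-certs-831188 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=6m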
	I1212 21:10:02.476354   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.476429   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.489326   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:02.975846   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.975935   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.992975   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.476463   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.476577   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.489471   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.975762   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.975891   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.992773   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.476395   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.476510   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.489163   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.976403   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.976503   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.990508   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.475988   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.476108   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.489347   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.975811   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.975874   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.988996   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.475817   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.475896   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.487886   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.976376   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.976445   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.988627   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.312460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312859   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312892   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:03.312807   62039 retry.go:31] will retry after 4.329332977s: waiting for machine to come up
	I1212 21:10:02.864894   60833 pod_ready.go:92] pod "etcd-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.864921   60833 pod_ready.go:81] duration metric: took 519.26143ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.864935   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871360   60833 pod_ready.go:92] pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.871392   60833 pod_ready.go:81] duration metric: took 6.449389ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871406   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529203   60833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.529228   60833 pod_ready.go:81] duration metric: took 1.657813273s waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529243   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722607   60833 pod_ready.go:92] pod "kube-proxy-nsv4w" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.722631   60833 pod_ready.go:81] duration metric: took 193.381057ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722641   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124360   60833 pod_ready.go:92] pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:05.124388   60833 pod_ready.go:81] duration metric: took 401.739767ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124401   60833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:07.476521   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.476603   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.487362   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:07.976016   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.976101   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.987221   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.475793   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.475894   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.486641   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.976140   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.976262   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.987507   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.476080   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:09.476168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:09.487537   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.946342   60948 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:09.946377   60948 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:09.946412   60948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:09.946487   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:09.988850   60948 cri.go:89] found id: ""
	I1212 21:10:09.988939   60948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:10.004726   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:10.015722   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:10.015787   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025706   60948 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025743   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:10.156614   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.030056   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.219060   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.315587   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
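Because none of the /etc/kubernetes/*.conf files survived (see the "No such file or directory" errors above), restartCluster falls back to re-running the individual kubeadm init phases against the freshly copied /var/tmp/minikube/kubeadm.yaml. Written out as plain shell, the logged sequence is:

    # same order as the Run lines above, using the kubeadm bundled for v1.16.0
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml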
	I1212 21:10:11.398016   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:11.398110   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.411642   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.927297   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:07.644473   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644921   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644950   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:07.644868   62039 retry.go:31] will retry after 5.180616294s: waiting for machine to come up
	I1212 21:10:07.428366   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:09.929940   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.157275   60628 start.go:369] acquired machines lock for "no-preload-343495" in 1m3.684137096s
	I1212 21:10:14.157330   60628 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:10:14.157342   60628 fix.go:54] fixHost starting: 
	I1212 21:10:14.157767   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:14.157812   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:14.175936   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 21:10:14.176421   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:14.176957   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:10:14.176982   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:14.177380   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:14.177601   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:14.177804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:10:14.179672   60628 fix.go:102] recreateIfNeeded on no-preload-343495: state=Stopped err=<nil>
	I1212 21:10:14.179696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	W1212 21:10:14.179911   60628 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:10:14.183064   60628 out.go:177] * Restarting existing kvm2 VM for "no-preload-343495" ...
	I1212 21:10:12.828825   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.829471   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Found IP for machine: 192.168.72.253
	I1212 21:10:12.829501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserving static IP address...
	I1212 21:10:12.829530   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has current primary IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.830061   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.830110   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | skip adding static IP to network mk-default-k8s-diff-port-171828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"}
	I1212 21:10:12.830133   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserved static IP address: 192.168.72.253
	I1212 21:10:12.830152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Getting to WaitForSSH function...
	I1212 21:10:12.830163   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for SSH to be available...
	I1212 21:10:12.832654   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833033   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.833065   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833273   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH client type: external
	I1212 21:10:12.833302   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa (-rw-------)
	I1212 21:10:12.833335   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:12.833352   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | About to run SSH command:
	I1212 21:10:12.833370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | exit 0
	I1212 21:10:12.931871   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | SSH cmd err, output: <nil>: 
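The DBG lines above show how the driver waits for SSH: it keeps invoking the external ssh client with "exit 0" until the guest answers. Reassembled from the logged arguments (order rearranged for readability), the probe is approximately:

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa \
      -p 22 docker@192.168.72.253 "exit 0"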
	I1212 21:10:12.932439   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetConfigRaw
	I1212 21:10:12.933250   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:12.936555   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937009   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.937051   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937341   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:10:12.937642   61298 machine.go:88] provisioning docker machine ...
	I1212 21:10:12.937669   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:12.937933   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938136   61298 buildroot.go:166] provisioning hostname "default-k8s-diff-port-171828"
	I1212 21:10:12.938161   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938373   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:12.941209   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941589   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.941620   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941796   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:12.941978   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942183   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:12.942539   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:12.942885   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:12.942904   61298 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-171828 && echo "default-k8s-diff-port-171828" | sudo tee /etc/hostname
	I1212 21:10:13.099123   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-171828
	
	I1212 21:10:13.099152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.102085   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.102496   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102756   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.102965   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103166   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.103580   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.104000   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.104034   61298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-171828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-171828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-171828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:13.246501   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:13.246535   61298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:13.246561   61298 buildroot.go:174] setting up certificates
	I1212 21:10:13.246577   61298 provision.go:83] configureAuth start
	I1212 21:10:13.246590   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:13.246875   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:13.249703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250010   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.250043   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250196   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.252501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.252814   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.252852   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.253086   61298 provision.go:138] copyHostCerts
	I1212 21:10:13.253151   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:13.253171   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:13.253266   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:13.253399   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:13.253412   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:13.253437   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:13.253501   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:13.253508   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:13.253526   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:13.253586   61298 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-171828 san=[192.168.72.253 192.168.72.253 localhost 127.0.0.1 minikube default-k8s-diff-port-171828]
	I1212 21:10:13.331755   61298 provision.go:172] copyRemoteCerts
	I1212 21:10:13.331819   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:13.331841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.334412   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334741   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.334777   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334981   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.335185   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.335369   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.335498   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.429448   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:10:13.454350   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:13.479200   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 21:10:13.505120   61298 provision.go:86] duration metric: configureAuth took 258.53005ms
	I1212 21:10:13.505151   61298 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:13.505370   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:13.505451   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.508400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.508826   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.508858   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.509144   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.509360   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509524   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.509829   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.510161   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.510184   61298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:13.874783   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:13.874810   61298 machine.go:91] provisioned docker machine in 937.151566ms
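The %!s(MISSING) placeholders in the printf command logged above are unexpanded format verbs in the echoed template; the SSH output just before this line confirms what actually landed on disk. Assuming the crio unit on the guest image sources this drop-in (as minikube's buildroot ISO does), the written file is simply:

    # /etc/sysconfig/crio.minikube, as echoed back in the SSH output above
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '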
	I1212 21:10:13.874822   61298 start.go:300] post-start starting for "default-k8s-diff-port-171828" (driver="kvm2")
	I1212 21:10:13.874835   61298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:13.874853   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:13.875182   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:13.875213   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.877937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.878400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878640   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.878819   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.878984   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.879148   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.978276   61298 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:13.984077   61298 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:13.984114   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:13.984229   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:13.984309   61298 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:13.984391   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:13.996801   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:14.021773   61298 start.go:303] post-start completed in 146.935628ms
	I1212 21:10:14.021796   61298 fix.go:56] fixHost completed within 24.013191129s
	I1212 21:10:14.021815   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.024847   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.025227   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.025599   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025788   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025951   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.026106   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:14.026436   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:14.026452   61298 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:14.157053   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415414.138141396
	
	I1212 21:10:14.157082   61298 fix.go:206] guest clock: 1702415414.138141396
	I1212 21:10:14.157092   61298 fix.go:219] Guest: 2023-12-12 21:10:14.138141396 +0000 UTC Remote: 2023-12-12 21:10:14.021800288 +0000 UTC m=+251.962428882 (delta=116.341108ms)
	I1212 21:10:14.157130   61298 fix.go:190] guest clock delta is within tolerance: 116.341108ms
	I1212 21:10:14.157141   61298 start.go:83] releasing machines lock for "default-k8s-diff-port-171828", held for 24.148576854s
	I1212 21:10:14.157193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.157567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:14.160748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161134   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.161172   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161489   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162089   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162259   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162333   61298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:14.162389   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.162627   61298 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:14.162652   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.165726   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.165941   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166485   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166548   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166598   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166636   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166649   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166905   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166907   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167104   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167153   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167231   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.167349   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167500   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.294350   61298 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:14.301705   61298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:14.459967   61298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:14.467979   61298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:14.468043   61298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:14.483883   61298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:14.483910   61298 start.go:475] detecting cgroup driver to use...
	I1212 21:10:14.483976   61298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:14.498105   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:14.511716   61298 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:14.511784   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:14.525795   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:14.539213   61298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:14.658453   61298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:14.786222   61298 docker.go:219] disabling docker service ...
	I1212 21:10:14.786296   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:14.801656   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:14.814821   61298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:14.950542   61298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:15.085306   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:15.098508   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:15.118634   61298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:15.118709   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.130579   61298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:15.130667   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.140672   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.150340   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.161966   61298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:15.173049   61298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:15.181620   61298 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:15.181703   61298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:15.195505   61298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:15.204076   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:15.327587   61298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:15.505003   61298 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:15.505078   61298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:15.512282   61298 start.go:543] Will wait 60s for crictl version
	I1212 21:10:15.512349   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:10:15.516564   61298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:15.556821   61298 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:15.556906   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.612743   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.665980   61298 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
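
The ssh_runner.go lines above show each provisioning command being executed inside the guest over SSH, with the exit status inspected afterwards. Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; it is not minikube's actual ssh_runner implementation, and the address, user, key path, and command are placeholder values loosely modelled on the log.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one SSH session, runs cmd on the remote host, and returns
// its combined stdout/stderr -- roughly what each "ssh_runner.go:195] Run: ..."
// line above represents.
func runOverSSH(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.CombinedOutput(cmd)
}

func main() {
	// Placeholder values; the run above uses 192.168.72.253:22 and the
	// per-profile id_rsa under .minikube/machines/.
	out, err := runOverSSH("192.168.72.253:22", "docker",
		"/path/to/.minikube/machines/example/id_rsa", "crio --version")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}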
	I1212 21:10:12.426883   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.927168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.962834   60948 api_server.go:72] duration metric: took 1.56481721s to wait for apiserver process to appear ...
	I1212 21:10:12.962862   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:12.962890   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.963447   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:12.963489   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.964022   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:13.464393   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:15.667323   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:15.670368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.670769   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:15.670804   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.671037   61298 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:15.675575   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:15.688523   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:10:15.688602   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:15.739601   61298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:10:15.739718   61298 ssh_runner.go:195] Run: which lz4
	I1212 21:10:15.744272   61298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:10:15.749574   61298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:10:15.749612   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
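
Once crictl reports that no preloaded images are present, the preload tarball is copied to /preloaded.tar.lz4 and then unpacked with tar -I lz4 -C /var -xf (the "Completed:" line appears further below). A small Go sketch of that extraction step, run locally via os/exec rather than over SSH and assuming tar and lz4 are installed, purely for illustration:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball into dir, the same
// command the log shows being run on the guest
// ("sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4").
func extractPreload(tarball, dir string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preloaded images extracted")
}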
	I1212 21:10:12.428614   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.430542   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:16.442797   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.184429   60628 main.go:141] libmachine: (no-preload-343495) Calling .Start
	I1212 21:10:14.184692   60628 main.go:141] libmachine: (no-preload-343495) Ensuring networks are active...
	I1212 21:10:14.186580   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network default is active
	I1212 21:10:14.187398   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network mk-no-preload-343495 is active
	I1212 21:10:14.188587   60628 main.go:141] libmachine: (no-preload-343495) Getting domain xml...
	I1212 21:10:14.189457   60628 main.go:141] libmachine: (no-preload-343495) Creating domain...
	I1212 21:10:15.509306   60628 main.go:141] libmachine: (no-preload-343495) Waiting to get IP...
	I1212 21:10:15.510320   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.510728   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.510772   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.510702   62255 retry.go:31] will retry after 275.567053ms: waiting for machine to come up
	I1212 21:10:15.788793   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.789233   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.789262   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.789193   62255 retry.go:31] will retry after 341.343409ms: waiting for machine to come up
	I1212 21:10:16.131936   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.132427   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.132452   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.132377   62255 retry.go:31] will retry after 302.905542ms: waiting for machine to come up
	I1212 21:10:16.437184   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.437944   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.437968   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.437850   62255 retry.go:31] will retry after 407.178114ms: waiting for machine to come up
	I1212 21:10:16.846738   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.847393   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.847429   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.847349   62255 retry.go:31] will retry after 507.703222ms: waiting for machine to come up
	I1212 21:10:17.357373   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:17.357975   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:17.358005   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:17.357907   62255 retry.go:31] will retry after 920.403188ms: waiting for machine to come up
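
The retry.go lines above poll libvirt for the machine's DHCP lease and sleep a growing, jittered delay between attempts. The following is a rough, self-contained Go sketch of that retry-with-backoff shape; the exact delays are assumptions for illustration, not minikube's actual policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or attempts run out, sleeping a
// randomized, increasing delay between tries -- the same shape as the
// "will retry after ...: waiting for machine to come up" lines above.
func retry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond << uint(i/2)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(10, func() error {
		calls++
		if calls < 4 { // pretend the VM has no IP for the first few polls
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}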
	I1212 21:10:18.464726   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 21:10:18.464781   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.736922   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.736969   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.736990   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.816132   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.816165   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.964508   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.012996   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.013048   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.464538   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.509558   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.509601   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.965183   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:21.369579   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:10:21.381334   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:10:21.381365   60948 api_server.go:131] duration metric: took 8.418495294s to wait for apiserver health ...
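
The api_server.go wait above boils down to polling https://<node-ip>:<port>/healthz and treating connection refused, 403 (before the RBAC bootstrap roles exist) and 500 (while post-start hooks run) as "not ready yet", stopping once the endpoint answers 200 "ok". A minimal, hypothetical Go version of that loop follows; the insecure TLS client and the 500 ms poll interval are assumptions for illustration, not minikube's code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Non-200 answers and connection errors are all treated as "keep waiting".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert during bring-up; a real
			// client would pin the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.202:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}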
	I1212 21:10:21.381378   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.381385   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.501371   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:21.801933   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:21.827010   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:21.853900   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:17.641827   61298 crio.go:444] Took 1.897583 seconds to copy over tarball
	I1212 21:10:17.641919   61298 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:10:21.283045   61298 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.641094924s)
	I1212 21:10:21.283076   61298 crio.go:451] Took 3.641222 seconds to extract the tarball
	I1212 21:10:21.283088   61298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:10:21.328123   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:21.387894   61298 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:10:21.387923   61298 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:10:21.387996   61298 ssh_runner.go:195] Run: crio config
	I1212 21:10:21.467191   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.467216   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.467255   61298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:21.467278   61298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-171828 NodeName:default-k8s-diff-port-171828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:21.467443   61298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-171828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:21.467537   61298 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-171828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 21:10:21.467596   61298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:10:21.478940   61298 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:21.479024   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:21.492604   61298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 21:10:21.514260   61298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:10:21.535059   61298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 21:10:21.557074   61298 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:21.562765   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:21.578989   61298 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828 for IP: 192.168.72.253
	I1212 21:10:21.579047   61298 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:21.579282   61298 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:21.579383   61298 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:21.579495   61298 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/client.key
	I1212 21:10:21.768212   61298 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key.a1600f99
	I1212 21:10:21.768305   61298 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key
	I1212 21:10:21.768447   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:21.768489   61298 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:21.768504   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:21.768542   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:21.768596   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:21.768625   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:21.768680   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:21.769557   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:21.800794   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:10:21.833001   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:21.864028   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:21.893107   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:21.918580   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:21.944095   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:21.970251   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:21.998947   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:22.027620   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:22.056851   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:22.084321   61298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:22.103273   61298 ssh_runner.go:195] Run: openssl version
	I1212 21:10:22.109518   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:18.932477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:21.431431   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:18.280164   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:18.280656   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:18.280687   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:18.280612   62255 retry.go:31] will retry after 761.825655ms: waiting for machine to come up
	I1212 21:10:19.043686   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:19.044170   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:19.044203   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:19.044117   62255 retry.go:31] will retry after 1.173408436s: waiting for machine to come up
	I1212 21:10:20.218938   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:20.219457   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:20.219488   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:20.219412   62255 retry.go:31] will retry after 1.484817124s: waiting for machine to come up
	I1212 21:10:21.706027   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:21.706505   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:21.706536   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:21.706467   62255 retry.go:31] will retry after 2.260831172s: waiting for machine to come up
	I1212 21:10:22.159195   60948 system_pods.go:59] 7 kube-system pods found
	I1212 21:10:22.284903   60948 system_pods.go:61] "coredns-5644d7b6d9-slvnx" [0db32241-69df-48dc-a60f-6921f9c5746f] Running
	I1212 21:10:22.284916   60948 system_pods.go:61] "etcd-old-k8s-version-372099" [72d219cb-b393-423d-ba62-b880bd2d26a0] Running
	I1212 21:10:22.284924   60948 system_pods.go:61] "kube-apiserver-old-k8s-version-372099" [c4f09d2d-07d2-4403-886b-37cb1471e7e5] Running
	I1212 21:10:22.284932   60948 system_pods.go:61] "kube-controller-manager-old-k8s-version-372099" [4a17c60c-2c72-4296-a7e4-0ae05e7bfa39] Running
	I1212 21:10:22.284939   60948 system_pods.go:61] "kube-proxy-5mvzb" [ec7c6540-35e2-4ae4-8592-d797132a8328] Running
	I1212 21:10:22.284945   60948 system_pods.go:61] "kube-scheduler-old-k8s-version-372099" [472284a4-9340-4bbc-8a1f-b9b55f4b0c3c] Running
	I1212 21:10:22.284952   60948 system_pods.go:61] "storage-provisioner" [b9fcec5f-bd1f-4c47-95cd-a9c8e3011e50] Running
	I1212 21:10:22.284961   60948 system_pods.go:74] duration metric: took 431.035724ms to wait for pod list to return data ...
	I1212 21:10:22.284990   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:22.592700   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:22.592734   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:22.592748   60948 node_conditions.go:105] duration metric: took 307.751463ms to run NodePressure ...
	I1212 21:10:22.592770   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:23.483331   60948 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:23.500661   60948 retry.go:31] will retry after 162.846257ms: kubelet not initialised
	I1212 21:10:23.669569   60948 retry.go:31] will retry after 257.344573ms: kubelet not initialised
	I1212 21:10:23.942373   60948 retry.go:31] will retry after 538.191385ms: kubelet not initialised
	I1212 21:10:24.487436   60948 retry.go:31] will retry after 635.824669ms: kubelet not initialised
	I1212 21:10:25.129226   60948 retry.go:31] will retry after 946.117517ms: kubelet not initialised
	I1212 21:10:26.082106   60948 retry.go:31] will retry after 2.374588936s: kubelet not initialised
	I1212 21:10:22.121093   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291519   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291585   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.297989   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:22.309847   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:22.321817   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326715   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326766   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.333001   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:22.345044   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:22.357827   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362795   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362858   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.368864   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:22.380605   61298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:22.385986   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:22.392931   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:22.399683   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:22.407203   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:22.414730   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:22.421808   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
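
Each of the openssl x509 ... -checkend 86400 runs above asks whether a certificate expires within the next 86400 seconds (24 h). The same check can be expressed with Go's crypto/x509, as in this sketch; the path used in main is just one of the certs named in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d -- the question "openssl x509 -checkend 86400" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}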
	I1212 21:10:22.430050   61298 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:22.430205   61298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:22.430263   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:22.482907   61298 cri.go:89] found id: ""
	I1212 21:10:22.482981   61298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:22.495001   61298 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:22.495032   61298 kubeadm.go:636] restartCluster start
	I1212 21:10:22.495104   61298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:22.506418   61298 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.508078   61298 kubeconfig.go:92] found "default-k8s-diff-port-171828" server: "https://192.168.72.253:8444"
	I1212 21:10:22.511809   61298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:22.523641   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.523703   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.536887   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.536913   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.536965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.549418   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.050111   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.050218   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.063845   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.550201   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.550303   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.567468   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.050021   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.050193   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.064792   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.550119   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.550213   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.568169   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.049891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.049997   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.063341   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.549592   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.549682   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.564096   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.049596   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.049701   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.063482   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.549680   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.549793   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.563956   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.049482   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.049614   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.062881   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.440487   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:25.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:23.969715   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:23.970242   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:23.970272   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:23.970200   62255 retry.go:31] will retry after 1.769886418s: waiting for machine to come up
	I1212 21:10:25.741628   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:25.742060   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:25.742098   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:25.742014   62255 retry.go:31] will retry after 2.283589137s: waiting for machine to come up
	I1212 21:10:28.462838   60948 retry.go:31] will retry after 1.809333362s: kubelet not initialised
	I1212 21:10:30.278747   60948 retry.go:31] will retry after 4.059791455s: kubelet not initialised
	I1212 21:10:27.550084   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.550176   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.564365   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.049688   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.049771   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.065367   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.549922   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.550009   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.566964   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.049535   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.049643   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.062264   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.549891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.549970   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.563687   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.050397   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.050492   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.065602   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.550210   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.550298   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.562793   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.050281   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.050374   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.064836   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.550407   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.550527   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.563474   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:32.049593   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:32.049689   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:32.062459   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.935166   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:30.429274   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:28.028345   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:28.028796   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:28.028824   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:28.028757   62255 retry.go:31] will retry after 4.021160394s: waiting for machine to come up
	I1212 21:10:32.052992   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:32.053479   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:32.053506   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:32.053442   62255 retry.go:31] will retry after 4.864494505s: waiting for machine to come up
	I1212 21:10:34.344571   60948 retry.go:31] will retry after 9.338953291s: kubelet not initialised
	I1212 21:10:32.524460   61298 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:32.524492   61298 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:32.524523   61298 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:32.524586   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:32.565596   61298 cri.go:89] found id: ""
	I1212 21:10:32.565685   61298 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:32.582458   61298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:32.592539   61298 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:32.592615   61298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603658   61298 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603683   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:32.730418   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.535390   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.742601   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.839081   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.909128   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:33.909209   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:33.928197   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.452146   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.952473   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.452270   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.952431   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.451626   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.482100   61298 api_server.go:72] duration metric: took 2.572973799s to wait for apiserver process to appear ...
	I1212 21:10:36.482125   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:36.482154   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.482833   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.482869   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.483345   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.984105   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:32.433032   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:34.928686   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.930503   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.920697   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921201   60628 main.go:141] libmachine: (no-preload-343495) Found IP for machine: 192.168.61.176
	I1212 21:10:36.921235   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has current primary IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921248   60628 main.go:141] libmachine: (no-preload-343495) Reserving static IP address...
	I1212 21:10:36.921719   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.921757   60628 main.go:141] libmachine: (no-preload-343495) DBG | skip adding static IP to network mk-no-preload-343495 - found existing host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"}
	I1212 21:10:36.921770   60628 main.go:141] libmachine: (no-preload-343495) Reserved static IP address: 192.168.61.176
	I1212 21:10:36.921785   60628 main.go:141] libmachine: (no-preload-343495) Waiting for SSH to be available...
	I1212 21:10:36.921802   60628 main.go:141] libmachine: (no-preload-343495) DBG | Getting to WaitForSSH function...
	I1212 21:10:36.924581   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.924908   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.924941   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.925154   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH client type: external
	I1212 21:10:36.925191   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa (-rw-------)
	I1212 21:10:36.925223   60628 main.go:141] libmachine: (no-preload-343495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:36.925234   60628 main.go:141] libmachine: (no-preload-343495) DBG | About to run SSH command:
	I1212 21:10:36.925246   60628 main.go:141] libmachine: (no-preload-343495) DBG | exit 0
	I1212 21:10:37.059619   60628 main.go:141] libmachine: (no-preload-343495) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:37.060017   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetConfigRaw
	I1212 21:10:37.060752   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.063599   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064325   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.064365   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064468   60628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/config.json ...
	I1212 21:10:37.064705   60628 machine.go:88] provisioning docker machine ...
	I1212 21:10:37.064733   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:37.064938   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065115   60628 buildroot.go:166] provisioning hostname "no-preload-343495"
	I1212 21:10:37.065144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065286   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.068118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068517   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.068548   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.068980   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069141   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069312   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.069507   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.069958   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.069985   60628 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-343495 && echo "no-preload-343495" | sudo tee /etc/hostname
	I1212 21:10:37.212905   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-343495
	
	I1212 21:10:37.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.215789   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216147   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.216182   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.216525   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216704   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216877   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.217037   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.217425   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.217444   60628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-343495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-343495/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-343495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:37.355687   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:37.355721   60628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:37.355754   60628 buildroot.go:174] setting up certificates
	I1212 21:10:37.355767   60628 provision.go:83] configureAuth start
	I1212 21:10:37.355780   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.356089   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.359197   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359644   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.359717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359937   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.362695   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363043   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.363079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363251   60628 provision.go:138] copyHostCerts
	I1212 21:10:37.363316   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:37.363336   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:37.363410   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:37.363536   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:37.363549   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:37.363585   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:37.363671   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:37.363677   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:37.363703   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:37.363757   60628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.no-preload-343495 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-343495]
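The "generating server cert" step above produces a server certificate whose SANs cover the machine IP, localhost, and the hostname. A minimal, illustrative sketch of such a certificate using the Go standard library (self-signed here for brevity, whereas the real flow signs with the CA key under .minikube/certs; the SAN values are copied from the log line above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the log: machine IP, localhost, minikube, and the hostname.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-343495"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-343495"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.176"), net.ParseIP("127.0.0.1")},
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	// Self-signed for the sketch; the provisioner signs with ca.pem/ca-key.pem instead.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}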
	I1212 21:10:37.526121   60628 provision.go:172] copyRemoteCerts
	I1212 21:10:37.526205   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:37.526234   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.529079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529425   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.529492   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529659   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.529850   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.530009   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.530153   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:37.632384   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:37.661242   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:10:37.689215   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:10:37.714781   60628 provision.go:86] duration metric: configureAuth took 358.999712ms
	I1212 21:10:37.714819   60628 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:37.715040   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:10:37.715144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.718379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.718815   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.718844   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.719212   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.719422   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719625   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719789   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.719975   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.720484   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.720519   60628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:38.062630   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:38.062660   60628 machine.go:91] provisioned docker machine in 997.934774ms
	I1212 21:10:38.062673   60628 start.go:300] post-start starting for "no-preload-343495" (driver="kvm2")
	I1212 21:10:38.062687   60628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:38.062707   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.062999   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:38.063033   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.065898   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066299   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.066331   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066626   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.066878   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.067063   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.067228   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.164612   60628 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:38.170132   60628 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:38.170162   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:38.170244   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:38.170351   60628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:38.170467   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:38.181959   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:38.208734   60628 start.go:303] post-start completed in 146.045424ms
	I1212 21:10:38.208762   60628 fix.go:56] fixHost completed within 24.051421131s
	I1212 21:10:38.208782   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.212118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212519   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.212551   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212732   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213124   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213268   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.213436   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:38.213801   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:38.213827   60628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:38.337185   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415438.279018484
	
	I1212 21:10:38.337225   60628 fix.go:206] guest clock: 1702415438.279018484
	I1212 21:10:38.337239   60628 fix.go:219] Guest: 2023-12-12 21:10:38.279018484 +0000 UTC Remote: 2023-12-12 21:10:38.208766005 +0000 UTC m=+370.324656490 (delta=70.252479ms)
	I1212 21:10:38.337264   60628 fix.go:190] guest clock delta is within tolerance: 70.252479ms
	I1212 21:10:38.337275   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 24.179969571s
	I1212 21:10:38.337305   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.337527   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:38.340658   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341019   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.341053   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341233   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.341952   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342179   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342291   60628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:38.342336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.342388   60628 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:38.342413   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.345379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345419   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345762   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345809   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345841   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345864   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.346049   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346055   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346433   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346438   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346597   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.346596   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.467200   60628 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:38.475578   60628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:38.627838   60628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:38.634520   60628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:38.634614   60628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:38.654823   60628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:38.654847   60628 start.go:475] detecting cgroup driver to use...
	I1212 21:10:38.654928   60628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:38.673550   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:38.691252   60628 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:38.691318   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:38.707542   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:38.724686   60628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:38.843033   60628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:38.973535   60628 docker.go:219] disabling docker service ...
	I1212 21:10:38.973610   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:38.987940   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:39.001346   60628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:39.105401   60628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:39.209198   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:39.222268   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:39.243154   60628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:39.243226   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.253418   60628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:39.253497   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.263273   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.274546   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.284359   60628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:39.294828   60628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:39.304818   60628 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:39.304894   60628 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:39.318541   60628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:39.328819   60628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:39.439285   60628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:39.619385   60628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:39.619462   60628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:39.625279   60628 start.go:543] Will wait 60s for crictl version
	I1212 21:10:39.625358   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:39.630234   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:39.680505   60628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:39.680579   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.736272   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.796111   60628 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
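After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to exist and then for crictl to answer a version query. A minimal sketch of that style of wait loop (illustrative only; the real checks go through ssh_runner on the guest, not local exec):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check until it succeeds or the deadline passes.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	sock := "/var/run/crio/crio.sock"
	_ = waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock) // the socket file appears once crio is back up
		return err
	})
	_ = waitFor(60*time.Second, func() error {
		return exec.Command("crictl", "version").Run()
	})
}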
	I1212 21:10:39.732208   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.732243   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.732258   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.761735   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.761771   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.984129   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.990620   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:39.990650   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.484444   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.492006   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:40.492039   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.983459   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.990813   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:10:41.001024   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:10:41.001055   61298 api_server.go:131] duration metric: took 4.518922579s to wait for apiserver health ...
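The healthz wait above keeps polling https://192.168.72.253:8444/healthz, treating connection-refused, 403 (anonymous requests before RBAC bootstrap-roles finish), and 500 (poststarthooks still failing) as "not ready yet" until the endpoint finally returns 200. A minimal illustrative poll of that shape (not minikube's own api_server code; TLS verification is skipped here because the apiserver serves a self-signed cert):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.253:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode) // 403/500 while bootstrapping
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}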
	I1212 21:10:41.001070   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:41.001078   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:41.003043   61298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:41.004669   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:41.084775   61298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:41.173688   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:41.201100   61298 system_pods.go:59] 9 kube-system pods found
	I1212 21:10:41.201132   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201140   61298 system_pods.go:61] "coredns-5dd5756b68-hc52p" [f8895d1e-3484-4ffe-9d11-f5e4b7617c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201148   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:10:41.201158   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:10:41.201165   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:10:41.201171   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:10:41.201177   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:10:41.201182   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:10:41.201187   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:10:41.201193   61298 system_pods.go:74] duration metric: took 27.476871ms to wait for pod list to return data ...
	I1212 21:10:41.201203   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:41.205597   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:41.205624   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:41.205638   61298 node_conditions.go:105] duration metric: took 4.431218ms to run NodePressure ...
	I1212 21:10:41.205653   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:41.516976   61298 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529555   61298 kubeadm.go:787] kubelet initialised
	I1212 21:10:41.529592   61298 kubeadm.go:788] duration metric: took 12.533051ms waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529601   61298 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:41.538991   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.546618   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546645   61298 pod_ready.go:81] duration metric: took 7.620954ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.546658   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546667   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.556921   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556951   61298 pod_ready.go:81] duration metric: took 10.273719ms waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.556963   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556972   61298 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.563538   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563570   61298 pod_ready.go:81] duration metric: took 6.584443ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.563586   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563598   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.578973   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579009   61298 pod_ready.go:81] duration metric: took 15.402148ms waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.579025   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579046   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.978938   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978972   61298 pod_ready.go:81] duration metric: took 399.914995ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.978990   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978999   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
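The pod_ready waits above check each system-critical pod for the Ready condition and short-circuit (the "skipping!" errors) while the node itself still reports Ready=False. As an illustration of the underlying check, a small client-go sketch that reads one pod's Ready condition; the kubeconfig path and pod name here are assumptions taken from this environment, and this is not minikube's own helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-5dd5756b68-b5jrg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}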
	I1212 21:10:38.930743   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:41.429587   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:39.798106   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:39.800962   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801364   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:39.801399   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801592   60628 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:39.806328   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:39.821949   60628 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:10:39.822014   60628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:39.873704   60628 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:10:39.873733   60628 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:10:39.873820   60628 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.873840   60628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.873859   60628 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:39.874021   60628 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.874062   60628 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 21:10:39.874043   60628 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.873836   60628 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.874359   60628 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.875369   60628 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875379   60628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.875390   60628 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 21:10:39.875428   60628 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.875284   60628 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.875803   60628 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.060906   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.061267   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.063065   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.074673   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 21:10:40.076082   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.080787   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.108962   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.169237   60628 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 21:10:40.169289   60628 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.169363   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.172419   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.251588   60628 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 21:10:40.251638   60628 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.251684   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.264051   60628 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 21:10:40.264146   60628 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.264227   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397546   60628 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 21:10:40.397590   60628 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.397640   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397669   60628 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 21:10:40.397709   60628 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.397774   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397876   60628 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 21:10:40.397978   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.398033   60628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:10:40.398064   60628 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.398079   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.398105   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397976   60628 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.398142   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.398143   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.418430   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.418500   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.530581   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530693   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.530781   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530584   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 21:10:40.530918   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:40.544770   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.544970   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 21:10:40.545108   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:40.567016   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567130   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567196   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.567297   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.604461   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 21:10:40.604484   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.604531   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 21:10:40.604488   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604644   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604590   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.612665   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 21:10:40.612741   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612794   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612800   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 21:10:40.612935   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:40.615786   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 21:10:42.378453   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378486   61298 pod_ready.go:81] duration metric: took 399.478547ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.378499   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378508   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:42.778834   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778871   61298 pod_ready.go:81] duration metric: took 400.345358ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.778887   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778897   61298 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:43.179851   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179879   61298 pod_ready.go:81] duration metric: took 400.97377ms waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:43.179891   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179898   61298 pod_ready.go:38] duration metric: took 1.6502873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
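pod_ready above is minikube's own polling loop over the system-critical pods, skipping pods whose node is not yet Ready. An approximate by-hand equivalent uses kubectl's built-in wait (context and namespace are taken from this log; the selectors mirror the labels in the summary line above):

    kubectl --context default-k8s-diff-port-171828 -n kube-system \
      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context default-k8s-diff-port-171828 -n kube-system \
      get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'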
	I1212 21:10:43.179913   61298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:10:43.196087   61298 ops.go:34] apiserver oom_adj: -16
	I1212 21:10:43.196114   61298 kubeadm.go:640] restartCluster took 20.701074763s
	I1212 21:10:43.196126   61298 kubeadm.go:406] StartCluster complete in 20.766085453s
	I1212 21:10:43.196146   61298 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.196225   61298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:10:43.198844   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.199122   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:10:43.199268   61298 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:10:43.199342   61298 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199363   61298 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199372   61298 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:10:43.199396   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:43.199456   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199373   61298 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199492   61298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-171828"
	I1212 21:10:43.199389   61298 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199551   61298 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199568   61298 addons.go:240] addon metrics-server should already be in state true
	I1212 21:10:43.199637   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199891   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199915   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.199922   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199945   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.200148   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.200177   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.218067   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I1212 21:10:43.218679   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1212 21:10:43.218817   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219111   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219234   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1212 21:10:43.219356   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219372   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219590   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219607   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219699   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219807   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220061   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220258   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.220278   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.220324   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.220436   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.220488   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.220676   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.221418   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.221444   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.224718   61298 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.224742   61298 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:10:43.224769   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.225189   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.225227   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.225431   61298 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-171828" context rescaled to 1 replicas
	I1212 21:10:43.225467   61298 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:10:43.228523   61298 out.go:177] * Verifying Kubernetes components...
	I1212 21:10:43.230002   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:10:43.239165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1212 21:10:43.239749   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.240357   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.240383   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.240761   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.240937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.241446   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1212 21:10:43.241951   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.242522   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.242541   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.242864   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.242931   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.244753   61298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:43.243219   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.246309   61298 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.246332   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:10:43.246358   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.248809   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.250840   61298 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:10:43.252430   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:10:43.251041   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.250309   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.247068   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I1212 21:10:43.252596   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:10:43.252622   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.252718   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.252745   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.253368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.253677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.253846   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.254434   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.259686   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.259697   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.259748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259844   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.259883   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.259973   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.260149   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.260361   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.260420   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.261546   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.261594   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.284357   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I1212 21:10:43.284945   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.285431   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.285444   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.286009   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.286222   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.288257   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.288542   61298 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.288565   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:10:43.288586   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.291842   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.292527   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.292680   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.293076   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.293350   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.293512   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.293683   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.405154   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.426115   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:10:43.426141   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:10:43.486953   61298 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:10:43.486975   61298 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:43.491689   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:10:43.491709   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:10:43.505611   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.538745   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:43.538785   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:10:43.600598   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:44.933368   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.528176624s)
	I1212 21:10:44.933442   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.427784857s)
	I1212 21:10:44.933493   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933511   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933539   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332913009s)
	I1212 21:10:44.933496   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933559   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933566   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933569   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933926   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.933943   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933944   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.933955   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933964   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933974   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934081   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934096   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934118   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934120   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934127   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934132   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934138   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934156   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934372   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934397   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934401   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934808   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934845   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934858   61298 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-171828"
	I1212 21:10:44.937727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.937783   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.937806   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.945948   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.945966   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.946202   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.946220   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.949385   61298 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1212 21:10:43.688668   60948 retry.go:31] will retry after 13.919612963s: kubelet not initialised
	I1212 21:10:44.951009   61298 addons.go:502] enable addons completed in 1.751742212s: enabled=[storage-provisioner metrics-server default-storageclass]
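As the Run lines show, each addon is enabled by copying its manifest under /etc/kubernetes/addons/ on the node and applying it with the cluster's pinned kubectl binary. A sketch of that apply step with the exact paths from this log (run on the node), followed by the usual host-side check:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml
    # from the host: confirm which addons the profile reports as enabled
    minikube addons list -p default-k8s-diff-port-171828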
	I1212 21:10:45.583280   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.432062   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:45.929995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.305027541s)
	I1212 21:10:43.909740   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.296738263s)
	I1212 21:10:43.909764   60628 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:43.909770   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 21:10:43.909810   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:45.879475   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969630074s)
	I1212 21:10:45.879502   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 21:10:45.879527   60628 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:45.879592   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:47.584004   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:50.113807   61298 node_ready.go:49] node "default-k8s-diff-port-171828" has status "Ready":"True"
	I1212 21:10:50.113837   61298 node_ready.go:38] duration metric: took 6.626786171s waiting for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:50.113850   61298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:50.128903   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656130   61298 pod_ready.go:92] pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:50.656153   61298 pod_ready.go:81] duration metric: took 527.212389ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656161   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:47.931716   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.433176   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.267864   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.388242252s)
	I1212 21:10:50.267898   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 21:10:50.267931   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:50.267977   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:52.845895   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.577890173s)
	I1212 21:10:52.845935   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 21:10:52.845969   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.846023   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.677971   61298 pod_ready.go:102] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:53.179154   61298 pod_ready.go:92] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.179186   61298 pod_ready.go:81] duration metric: took 2.523018353s waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.179200   61298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185649   61298 pod_ready.go:92] pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.185673   61298 pod_ready.go:81] duration metric: took 6.463925ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185685   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193280   61298 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.193303   61298 pod_ready.go:81] duration metric: took 1.00761061s waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193313   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484196   61298 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.484223   61298 pod_ready.go:81] duration metric: took 290.902142ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484240   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883746   61298 pod_ready.go:92] pod "kube-proxy-47qmb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.883773   61298 pod_ready.go:81] duration metric: took 399.524854ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883784   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283637   61298 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:55.283670   61298 pod_ready.go:81] duration metric: took 399.871874ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283684   61298 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:52.931372   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.932174   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.204367   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.358317317s)
	I1212 21:10:54.204393   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 21:10:54.204425   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:54.204485   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:56.066774   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.862261726s)
	I1212 21:10:56.066802   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 21:10:56.066825   60628 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:56.066874   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:57.118959   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.052055479s)
	I1212 21:10:57.118985   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 21:10:57.119009   60628 cache_images.go:123] Successfully loaded all cached images
	I1212 21:10:57.119021   60628 cache_images.go:92] LoadImages completed in 17.245274715s
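Because no preload tarball exists for v1.29.0-rc.2, the images above are loaded from the local cache one at a time via podman into CRI-O's image store. A condensed sketch of that path for a single image, using only commands that appear in this log (assumes the cached tarball is already under /var/lib/minikube/images/ on the node):

    # what the runtime currently has
    sudo crictl images --output json
    # load one cached image tarball into the shared containers/storage used by CRI-O
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
    # confirm the image is now resolvable by reference
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.29.0-rc.2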
	I1212 21:10:57.119103   60628 ssh_runner.go:195] Run: crio config
	I1212 21:10:57.180068   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:10:57.180093   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:57.180109   60628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:57.180127   60628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-343495 NodeName:no-preload-343495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:57.180250   60628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-343495"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:57.180330   60628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-343495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
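The kubeadm config and kubelet drop-in shown above are written to the node a few lines below (kubeadm.yaml.new and 10-kubeadm.conf). If the profile is still running, they can be inspected directly on the guest; a sketch, assuming the standard minikube ssh entry point:

    minikube ssh -p no-preload-343495 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube ssh -p no-preload-343495 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf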
	I1212 21:10:57.180382   60628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:10:57.191949   60628 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:57.192034   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:57.202921   60628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 21:10:57.219512   60628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:10:57.235287   60628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1212 21:10:57.252278   60628 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:57.256511   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:57.268744   60628 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495 for IP: 192.168.61.176
	I1212 21:10:57.268781   60628 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:57.268959   60628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:57.269032   60628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:57.269133   60628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/client.key
	I1212 21:10:57.269228   60628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key.492ad1cf
	I1212 21:10:57.269316   60628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key
	I1212 21:10:57.269466   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:57.269511   60628 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:57.269526   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:57.269562   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:57.269597   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:57.269629   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:57.269685   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:57.270311   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:57.295960   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:10:57.320157   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:57.344434   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:57.368906   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:57.391830   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:57.415954   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:57.441182   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:57.465055   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:57.489788   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:57.513828   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:57.536138   60628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:57.553168   60628 ssh_runner.go:195] Run: openssl version
	I1212 21:10:57.558771   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:57.570141   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574935   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574990   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.580985   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:57.592528   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:57.603477   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608448   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608511   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.614316   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:57.625667   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:57.637284   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642258   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642323   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.648072   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:57.659762   60628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:57.664517   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:57.670385   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:57.676336   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:57.682074   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:57.688387   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:57.694542   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
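Each certificate check above relies on openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours). For example:

    # exit status 0: the client cert is still valid for at least another 24 hours
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for >= 24h" \
      || echo "expired or expiring within 24h"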
	I1212 21:10:57.700400   60628 kubeadm.go:404] StartCluster: {Name:no-preload-343495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:57.700520   60628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:57.700576   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:57.738703   60628 cri.go:89] found id: ""
	I1212 21:10:57.738776   60628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:57.749512   60628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:57.749538   60628 kubeadm.go:636] restartCluster start
	I1212 21:10:57.749610   60628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:57.758905   60628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.760000   60628 kubeconfig.go:92] found "no-preload-343495" server: "https://192.168.61.176:8443"
	I1212 21:10:57.762219   60628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:57.773107   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.773181   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.785478   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.785500   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.785554   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.797412   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.613799   60948 retry.go:31] will retry after 13.009137494s: kubelet not initialised
	I1212 21:10:57.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.591232   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:02.093666   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:57.429861   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.429944   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:01.438267   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:58.297630   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.297712   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.312155   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:58.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.797652   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.809726   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.297677   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.309875   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.798441   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.798531   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.810533   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.298154   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.298237   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.310050   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.797683   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.809712   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.298094   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.298224   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.310181   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.797635   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.797742   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.809336   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.297912   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.297997   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.309215   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.797666   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.797749   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.808815   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.590426   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.590850   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.929977   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.429697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.297975   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.298066   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.308865   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:03.798103   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.798207   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.809553   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.297580   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.297653   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.309100   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.797646   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.797767   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.809269   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.297665   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.309281   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.797809   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.797898   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.809794   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.298381   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.298497   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.309467   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.798050   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.798132   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.809758   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.298354   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:07.298434   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:07.309655   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.773157   60628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:11:07.773216   60628 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:11:07.773229   60628 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:11:07.773290   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:11:07.815986   60628 cri.go:89] found id: ""
	I1212 21:11:07.816068   60628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:11:07.832950   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:11:07.842287   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:11:07.842353   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851694   60628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851720   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:10.630075   60948 kubeadm.go:787] kubelet initialised
	I1212 21:11:10.630105   60948 kubeadm.go:788] duration metric: took 47.146743334s waiting for restarted kubelet to initialise ...
	I1212 21:11:10.630116   60948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:10.637891   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644674   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.644700   60948 pod_ready.go:81] duration metric: took 6.771094ms waiting for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644710   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651801   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.651830   60948 pod_ready.go:81] duration metric: took 7.112566ms waiting for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651845   60948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659678   60948 pod_ready.go:92] pod "etcd-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.659700   60948 pod_ready.go:81] duration metric: took 7.845111ms waiting for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659711   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665929   60948 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.665958   60948 pod_ready.go:81] duration metric: took 6.237833ms waiting for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665972   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028938   60948 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.028961   60948 pod_ready.go:81] duration metric: took 362.981718ms waiting for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028973   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428824   60948 pod_ready.go:92] pod "kube-proxy-5mvzb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.428853   60948 pod_ready.go:81] duration metric: took 399.87314ms waiting for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428866   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828546   60948 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.828578   60948 pod_ready.go:81] duration metric: took 399.696769ms waiting for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828590   60948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
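	(The pod_ready loop above polls each system-critical pod until its PodReady condition turns True, then moves on to the metrics-server pod. A minimal client-go sketch of that kind of readiness poll follows; the kubeconfig path, the 4-minute deadline, and the reuse of the coredns pod name from this log are illustrative assumptions, not minikube's actual implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady returns true when the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the pod reports Ready or the deadline passes.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-7nkxh", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}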
	I1212 21:11:09.094309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:11.098257   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:08.928635   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:10.929896   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:07.988857   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.772924   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.980401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.108938   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.189716   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:11:09.189780   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.201432   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.722085   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.222325   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.721931   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.222186   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.721642   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.745977   60628 api_server.go:72] duration metric: took 2.556260463s to wait for apiserver process to appear ...
	I1212 21:11:11.746005   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:11:11.746025   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:14.135897   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.138482   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:13.590920   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.591230   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:12.931314   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.429327   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.294367   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.294401   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.294413   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.347744   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.347780   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.848435   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.853773   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:16.853823   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.348312   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.359543   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.359579   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.848425   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.853966   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.854006   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:18.348644   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:18.373028   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:11:18.385301   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:11:18.385341   60628 api_server.go:131] duration metric: took 6.639327054s to wait for apiserver health ...
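	(The healthz wait above keeps issuing GET /healthz until the apiserver stops returning 403/500 and answers 200 ok. As a rough illustration only, a poll loop of that shape could look like the Go sketch below; the endpoint is the one from this log, the timeout values are arbitrary, and skipping TLS verification here merely stands in for the CA handling the real client does.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify is used only in this sketch; a real client would
		// trust the cluster CA certificate instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.176:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}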
	I1212 21:11:18.385353   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:11:18.385362   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:11:18.387289   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:11:18.636422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:20.636472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.592197   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.593157   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:21.594049   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.434254   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.930697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:18.388998   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:11:18.449634   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:11:18.491001   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:11:18.517694   60628 system_pods.go:59] 8 kube-system pods found
	I1212 21:11:18.517729   60628 system_pods.go:61] "coredns-76f75df574-s9jgn" [b13d32b4-a44b-4f79-bece-d0adafef4c7c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:11:18.517740   60628 system_pods.go:61] "etcd-no-preload-343495" [ad48db04-9c79-48e9-a001-1a9061c43cb9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:11:18.517754   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [24d024c1-a89f-4ede-8dbf-7502f0179cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:11:18.517760   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [10ce49e3-2679-4ac5-89aa-9179582ae778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:11:18.517765   60628 system_pods.go:61] "kube-proxy-492l6" [3a2bbe46-0506-490f-aae8-a97e48f3205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:11:18.517773   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [bca80470-c204-4a34-9c7d-5de3ad382c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:11:18.517778   60628 system_pods.go:61] "metrics-server-57f55c9bc5-tmmk4" [11066021-353e-418e-9c7f-78e72dae44a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:11:18.517785   60628 system_pods.go:61] "storage-provisioner" [e681d4cd-f2f6-4cf3-ba09-0f361a64aafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:11:18.517794   60628 system_pods.go:74] duration metric: took 26.756848ms to wait for pod list to return data ...
	I1212 21:11:18.517815   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:11:18.521330   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:11:18.521362   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:11:18.521377   60628 node_conditions.go:105] duration metric: took 3.557177ms to run NodePressure ...
	I1212 21:11:18.521401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:18.945267   60628 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958848   60628 kubeadm.go:787] kubelet initialised
	I1212 21:11:18.958877   60628 kubeadm.go:788] duration metric: took 13.578451ms waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958886   60628 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:18.964819   60628 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:20.987111   60628 pod_ready.go:102] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.494268   60628 pod_ready.go:92] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:22.494299   60628 pod_ready.go:81] duration metric: took 3.529452237s waiting for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:22.494311   60628 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:23.136140   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:25.635800   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.093215   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.590861   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.429921   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.928565   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.929668   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.514490   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.013783   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.637165   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:30.133948   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.091057   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.598428   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:28.930654   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.428436   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.514918   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.514945   60628 pod_ready.go:81] duration metric: took 7.020626508s waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.514955   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524669   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.524696   60628 pod_ready.go:81] duration metric: took 9.734059ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524709   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541808   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.541830   60628 pod_ready.go:81] duration metric: took 17.113672ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541839   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553955   60628 pod_ready.go:92] pod "kube-proxy-492l6" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.553979   60628 pod_ready.go:81] duration metric: took 12.134143ms waiting for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553988   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562798   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.562835   60628 pod_ready.go:81] duration metric: took 8.836628ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562850   60628 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:31.818614   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:32.134558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.135376   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.634429   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.090158   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.091290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.429336   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:35.430448   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.819222   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.318847   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.637527   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:41.134980   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.115262   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.591502   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:37.929700   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:39.929830   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.318911   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.319619   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.319750   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.135558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.635174   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.090309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.590529   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.434126   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.931810   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.818997   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.321699   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.635472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.636294   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.640471   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.590577   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.590885   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.591122   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.429836   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.431518   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.928631   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.823419   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:52.319752   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.137390   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.634152   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.593196   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.089777   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.929750   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:55.932860   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.321554   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.819877   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.136605   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.092816   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.591682   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.429543   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.432255   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:59.318053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.325068   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.137023   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.635397   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.091397   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.094195   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:02.933370   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.430020   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.819751   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:06.319806   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.137648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.635154   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.591471   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.091503   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.430684   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:09.929393   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.319984   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.821053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.637206   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.136850   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.590992   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.591391   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.591744   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.429299   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.429724   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.430114   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:13.329939   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.820117   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.820519   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.199675   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.635179   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.635426   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.091628   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.091739   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:18.929340   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.929933   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.319134   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.819399   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:24.133408   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:26.134293   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:23.093543   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.591828   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.930710   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.434148   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.319949   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.337078   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.134422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.137461   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.090730   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.092555   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.928685   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.929200   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.929272   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.819461   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.819541   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.633893   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.636198   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.636373   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.590019   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.590953   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.591420   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.929488   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:35.929671   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.819661   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.322177   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.137315   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.635168   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.097607   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.590836   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:37.930820   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.930916   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:38.324332   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:40.819395   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.819784   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.640489   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.134648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.590910   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.592083   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.429717   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:44.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.431053   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.320122   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:47.819547   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.135328   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.137213   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.091979   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.093149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.929529   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:51.428177   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.319560   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.820242   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.635136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.637000   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.591430   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.090634   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:53.429307   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.429455   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.821647   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.319971   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.135608   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.137606   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.634197   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.590565   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:00.091074   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.429785   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.928834   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.818255   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.819526   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:03.635008   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.134591   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.591023   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.592260   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.092331   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.430411   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.930385   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.326885   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.822828   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:08.135379   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:10.136957   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.590114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.593478   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.434219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.929736   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.930477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.322955   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.819793   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:12.137554   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.635349   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.637857   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.092558   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.591772   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.429362   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.931219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.319867   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.325224   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.135196   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.634789   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.090842   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.591235   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.929464   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:18.326463   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:20.819839   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:22.820060   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.636879   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.135188   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.591676   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.591833   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.929811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.429286   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.319356   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.819668   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.634130   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.635441   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.591961   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.090560   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:32.091429   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.929344   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.929561   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:29.820548   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:31.820901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.134798   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.635317   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.094290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.589895   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.429811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.429995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.319447   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.822690   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.636833   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.136281   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:38.591586   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.090302   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.929337   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.428532   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:39.321656   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.820917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.635037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.135037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:43.091587   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.590322   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.429616   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.430483   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.431960   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.319403   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.326448   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.136136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:49.635013   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.635308   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.592114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:50.089825   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:52.090721   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.928619   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.429031   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.820121   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.319794   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.134872   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:54.589746   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.590432   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.429817   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:55.929211   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.820666   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.322986   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.135622   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.139553   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.592602   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:01.091154   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:57.929777   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:59.930300   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.818901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.819587   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.634488   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.636059   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:03.591886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:06.091886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.432472   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.125384   60833 pod_ready.go:81] duration metric: took 4m0.000960425s waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:05.125428   60833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:05.125437   60833 pod_ready.go:38] duration metric: took 4m2.799403108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
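(For context: the wait that times out above is a poll on the pod's Ready condition. A minimal sketch of such a poll using client-go is shown below; this is an illustration, not minikube's actual pod_ready implementation, and the kubeconfig path is a placeholder. The pod name and the 4-minute budget are taken from the log lines above.)

	// Minimal sketch: poll a pod's Ready condition with client-go.
	// Not minikube's implementation; kubeconfig path is a placeholder.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-57f55c9bc5-v978l", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready") // the outcome seen above
	}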
	I1212 21:14:05.125453   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:05.125518   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:05.125592   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:05.203017   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:05.203045   60833 cri.go:89] found id: ""
	I1212 21:14:05.203054   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:05.203115   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.208622   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:05.208693   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:05.250079   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.250102   60833 cri.go:89] found id: ""
	I1212 21:14:05.250118   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:05.250161   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.254870   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:05.254946   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:05.323718   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:05.323748   60833 cri.go:89] found id: ""
	I1212 21:14:05.323757   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:05.323819   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.328832   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:05.328902   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:05.372224   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.372252   60833 cri.go:89] found id: ""
	I1212 21:14:05.372262   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:05.372316   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.377943   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:05.378007   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:05.417867   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:05.417894   60833 cri.go:89] found id: ""
	I1212 21:14:05.417905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:05.417961   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.422198   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:05.422264   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:05.462031   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:05.462052   60833 cri.go:89] found id: ""
	I1212 21:14:05.462059   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:05.462114   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.466907   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:05.466962   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:05.512557   60833 cri.go:89] found id: ""
	I1212 21:14:05.512585   60833 logs.go:284] 0 containers: []
	W1212 21:14:05.512592   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:05.512597   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:05.512663   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:05.553889   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.553914   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:05.553921   60833 cri.go:89] found id: ""
	I1212 21:14:05.553929   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:05.553982   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.558864   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.563550   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:05.563572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:05.627093   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:05.627135   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:05.642800   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:05.642827   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:05.820642   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:05.820683   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.871256   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:05.871299   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.913399   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:05.913431   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.955061   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:05.955103   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:06.012639   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:06.012681   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:06.057933   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:06.057970   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:06.110367   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:06.110400   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:06.173711   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:06.173746   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:06.214291   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:06.214328   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:06.260105   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:06.260142   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:03.320010   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.321011   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.821313   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.134137   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.635405   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:08.591048   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:10.593286   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.219373   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:09.237985   60833 api_server.go:72] duration metric: took 4m14.403294004s to wait for apiserver process to appear ...
	I1212 21:14:09.238014   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:09.238057   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:09.238119   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:09.281005   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:09.281028   60833 cri.go:89] found id: ""
	I1212 21:14:09.281037   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:09.281097   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.285354   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:09.285436   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:09.336833   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.336864   60833 cri.go:89] found id: ""
	I1212 21:14:09.336874   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:09.336937   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.342850   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:09.342928   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:09.387107   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.387133   60833 cri.go:89] found id: ""
	I1212 21:14:09.387143   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:09.387202   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.392729   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:09.392806   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:09.433197   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:09.433225   60833 cri.go:89] found id: ""
	I1212 21:14:09.433232   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:09.433281   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.438043   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:09.438092   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:09.486158   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.486185   60833 cri.go:89] found id: ""
	I1212 21:14:09.486200   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:09.486255   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.491667   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:09.491735   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:09.536085   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:09.536108   60833 cri.go:89] found id: ""
	I1212 21:14:09.536114   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:09.536165   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.540939   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:09.541008   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:09.585160   60833 cri.go:89] found id: ""
	I1212 21:14:09.585187   60833 logs.go:284] 0 containers: []
	W1212 21:14:09.585195   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:09.585200   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:09.585254   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:09.628972   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:09.629001   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:09.629008   60833 cri.go:89] found id: ""
	I1212 21:14:09.629017   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:09.629075   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.634242   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.639308   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:09.639344   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:09.766299   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:09.766329   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.816655   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:09.816699   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.863184   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:09.863212   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.924345   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:09.924382   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:10.363852   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:10.363897   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:10.417375   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:10.417407   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:10.432758   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:10.432788   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:10.483732   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:10.483778   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:10.538254   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:10.538283   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:10.598142   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:10.598174   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:10.650678   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:10.650710   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:10.697971   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:10.698000   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:10.318636   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.321917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.134600   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:14.134822   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.634845   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.091008   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:15.589901   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.241720   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:14:13.248465   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:14:13.249814   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:14:13.249839   60833 api_server.go:131] duration metric: took 4.011816395s to wait for apiserver health ...
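(For context: the healthz wait above is a plain HTTPS probe of the apiserver. A minimal, self-contained sketch is shown below; it is not minikube's api_server code. The address is copied from the log, and InsecureSkipVerify is used only to keep the sketch short; a real check would trust the cluster CA instead.)

	// Minimal sketch: probe the apiserver /healthz endpoint.
	// Assumption: anonymous access to /healthz is permitted, as it is by default.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		}
		resp, err := client.Get("https://192.168.50.163:8443/healthz") // address taken from the log
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d, body %q\n", resp.StatusCode, body) // expect 200 and "ok", as logged
	}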
	I1212 21:14:13.249848   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:14:13.249871   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:13.249916   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:13.300138   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:13.300161   60833 cri.go:89] found id: ""
	I1212 21:14:13.300171   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:13.300228   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.306350   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:13.306424   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:13.358644   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.358667   60833 cri.go:89] found id: ""
	I1212 21:14:13.358676   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:13.358737   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.363921   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:13.363989   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:13.413339   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:13.413366   60833 cri.go:89] found id: ""
	I1212 21:14:13.413374   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:13.413420   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.418188   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:13.418248   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:13.461495   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:13.461522   60833 cri.go:89] found id: ""
	I1212 21:14:13.461532   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:13.461581   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.465878   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:13.465951   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:13.511866   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.511895   60833 cri.go:89] found id: ""
	I1212 21:14:13.511905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:13.511960   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.516312   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:13.516381   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:13.560993   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.561023   60833 cri.go:89] found id: ""
	I1212 21:14:13.561034   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:13.561092   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.565439   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:13.565514   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:13.608401   60833 cri.go:89] found id: ""
	I1212 21:14:13.608434   60833 logs.go:284] 0 containers: []
	W1212 21:14:13.608445   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:13.608452   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:13.608507   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:13.661929   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:13.661956   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:13.661963   60833 cri.go:89] found id: ""
	I1212 21:14:13.661972   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:13.662036   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.667039   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.671770   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:13.671791   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:13.793637   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:13.793671   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.844253   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:13.844286   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.886965   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:13.886997   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.946537   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:13.946572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:13.999732   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:13.999769   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:14.015819   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:14.015849   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:14.063649   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:14.063684   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:14.116465   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:14.116499   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:14.179838   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:14.179875   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:14.224213   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:14.224243   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:14.262832   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:14.262858   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:14.307981   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:14.308008   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:17.188864   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:14:17.188919   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.188927   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.188934   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.188943   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.188950   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.188959   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.188980   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.188988   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.188996   60833 system_pods.go:74] duration metric: took 3.939142839s to wait for pod list to return data ...
	I1212 21:14:17.189005   60833 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:14:17.192352   60833 default_sa.go:45] found service account: "default"
	I1212 21:14:17.192390   60833 default_sa.go:55] duration metric: took 3.37914ms for default service account to be created ...
	I1212 21:14:17.192400   60833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:14:17.198396   60833 system_pods.go:86] 8 kube-system pods found
	I1212 21:14:17.198424   60833 system_pods.go:89] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.198429   60833 system_pods.go:89] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.198433   60833 system_pods.go:89] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.198438   60833 system_pods.go:89] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.198442   60833 system_pods.go:89] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.198446   60833 system_pods.go:89] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.198455   60833 system_pods.go:89] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.198459   60833 system_pods.go:89] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.198466   60833 system_pods.go:126] duration metric: took 6.060971ms to wait for k8s-apps to be running ...
	I1212 21:14:17.198473   60833 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:14:17.198513   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:14:17.217190   60833 system_svc.go:56] duration metric: took 18.71037ms WaitForService to wait for kubelet.
	I1212 21:14:17.217224   60833 kubeadm.go:581] duration metric: took 4m22.382539055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:14:17.217249   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:14:17.221504   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:14:17.221540   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:14:17.221555   60833 node_conditions.go:105] duration metric: took 4.300742ms to run NodePressure ...
	I1212 21:14:17.221569   60833 start.go:228] waiting for startup goroutines ...
	I1212 21:14:17.221577   60833 start.go:233] waiting for cluster config update ...
	I1212 21:14:17.221594   60833 start.go:242] writing updated cluster config ...
	I1212 21:14:17.221939   60833 ssh_runner.go:195] Run: rm -f paused
	I1212 21:14:17.277033   60833 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:14:17.279044   60833 out.go:177] * Done! kubectl is now configured to use "embed-certs-831188" cluster and "default" namespace by default
	I1212 21:14:14.818262   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.823731   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:18.634990   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.135517   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:17.593149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:20.091419   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:22.091781   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:19.320462   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.819129   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.636400   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.134084   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:24.591552   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:27.090974   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.825879   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.318691   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.135741   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:30.635812   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:29.091882   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.590150   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.819815   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.319140   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.134738   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:35.637961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.591929   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.091976   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.819872   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.325409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.139066   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.635659   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:41.090674   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.819966   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.820281   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.135071   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.635762   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.091695   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.595126   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.323343   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.819822   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.134846   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.135229   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.092328   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.591470   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.319483   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.819702   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.135550   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:54.634163   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:56.634961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.593957   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.091338   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.284411   61298 pod_ready.go:81] duration metric: took 4m0.000712263s waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:55.284453   61298 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:55.284462   61298 pod_ready.go:38] duration metric: took 4m5.170596318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:55.284486   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:55.284536   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:55.284595   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:55.345012   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:55.345043   61298 cri.go:89] found id: ""
	I1212 21:14:55.345055   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:55.345118   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.350261   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:55.350329   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:55.403088   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:55.403116   61298 cri.go:89] found id: ""
	I1212 21:14:55.403124   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:55.403169   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.408043   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:55.408103   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:55.449581   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:55.449608   61298 cri.go:89] found id: ""
	I1212 21:14:55.449615   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:55.449670   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.454762   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:55.454828   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:55.502919   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:55.502960   61298 cri.go:89] found id: ""
	I1212 21:14:55.502970   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:55.503050   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.508035   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:55.508101   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:55.550335   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:55.550365   61298 cri.go:89] found id: ""
	I1212 21:14:55.550376   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:55.550437   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.554985   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:55.555043   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:55.599678   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:55.599707   61298 cri.go:89] found id: ""
	I1212 21:14:55.599716   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:55.599772   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.604830   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:55.604913   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:55.651733   61298 cri.go:89] found id: ""
	I1212 21:14:55.651767   61298 logs.go:284] 0 containers: []
	W1212 21:14:55.651774   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:55.651779   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:55.651825   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:55.690691   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:55.690716   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.690723   61298 cri.go:89] found id: ""
	I1212 21:14:55.690732   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:55.690778   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.695227   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.699700   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:14:55.699723   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:55.751176   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:14:55.751210   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.789388   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:55.789417   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:56.270250   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:56.270296   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:56.315517   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:56.315549   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:56.377591   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:14:56.377648   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:56.432089   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:56.432124   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:56.496004   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:56.496038   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:56.543979   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:56.544010   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:56.599613   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:14:56.599644   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:56.646113   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:14:56.646146   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:56.693081   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:56.693111   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:56.709557   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:56.709591   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:53.319742   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.320811   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:57.820478   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.134092   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:01.135233   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.366965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:59.385251   61298 api_server.go:72] duration metric: took 4m16.159743319s to wait for apiserver process to appear ...
	I1212 21:14:59.385280   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:59.385317   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:59.385365   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:59.433011   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:59.433038   61298 cri.go:89] found id: ""
	I1212 21:14:59.433047   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:59.433088   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.438059   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:59.438136   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:59.477000   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.477078   61298 cri.go:89] found id: ""
	I1212 21:14:59.477093   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:59.477153   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.481716   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:59.481791   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:59.526936   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.526966   61298 cri.go:89] found id: ""
	I1212 21:14:59.526975   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:59.527037   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.535907   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:59.535985   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:59.580818   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:59.580848   61298 cri.go:89] found id: ""
	I1212 21:14:59.580856   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:59.580916   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.585685   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:59.585733   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:59.640697   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:59.640721   61298 cri.go:89] found id: ""
	I1212 21:14:59.640731   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:59.640798   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.644940   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:59.645004   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:59.687873   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.687901   61298 cri.go:89] found id: ""
	I1212 21:14:59.687910   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:59.687960   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.692382   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:59.692442   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:59.735189   61298 cri.go:89] found id: ""
	I1212 21:14:59.735225   61298 logs.go:284] 0 containers: []
	W1212 21:14:59.735235   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:59.735256   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:59.735323   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:59.778668   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:59.778702   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:59.778708   61298 cri.go:89] found id: ""
	I1212 21:14:59.778717   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:59.778773   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.782827   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.787277   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:59.787303   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:59.802470   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:59.802499   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.864191   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:59.864225   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.911007   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:59.911032   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.975894   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:59.975932   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:00.021750   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:00.021780   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:00.061527   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:00.061557   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:00.484318   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:00.484359   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:00.549321   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:00.549357   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:00.600589   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:00.600629   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:00.643660   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:00.643690   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:00.698016   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:00.698047   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:00.741819   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:00.741850   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:00.319685   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:02.320017   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.136545   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:05.635632   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.383318   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:15:03.389750   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:15:03.391084   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:15:03.391117   61298 api_server.go:131] duration metric: took 4.005829911s to wait for apiserver health ...
	I1212 21:15:03.391155   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:03.391181   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:15:03.391262   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:15:03.438733   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:03.438754   61298 cri.go:89] found id: ""
	I1212 21:15:03.438762   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:15:03.438809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.443133   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:15:03.443203   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:15:03.488960   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.488990   61298 cri.go:89] found id: ""
	I1212 21:15:03.489001   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:15:03.489058   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.493741   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:15:03.493807   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:15:03.541286   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:03.541316   61298 cri.go:89] found id: ""
	I1212 21:15:03.541325   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:15:03.541387   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.545934   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:15:03.546008   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:15:03.585937   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.585962   61298 cri.go:89] found id: ""
	I1212 21:15:03.585971   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:15:03.586039   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.590444   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:15:03.590516   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:15:03.626793   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:03.626826   61298 cri.go:89] found id: ""
	I1212 21:15:03.626835   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:15:03.626894   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.631829   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:15:03.631906   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:15:03.676728   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:03.676750   61298 cri.go:89] found id: ""
	I1212 21:15:03.676758   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:15:03.676809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.681068   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:15:03.681123   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:15:03.723403   61298 cri.go:89] found id: ""
	I1212 21:15:03.723430   61298 logs.go:284] 0 containers: []
	W1212 21:15:03.723437   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:15:03.723442   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:15:03.723502   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:15:03.772837   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.772868   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.772875   61298 cri.go:89] found id: ""
	I1212 21:15:03.772884   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:15:03.772940   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.777274   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.782354   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:15:03.782379   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.823892   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:03.823919   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.866656   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:15:03.866689   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.920757   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:03.920798   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.963737   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:03.963766   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:04.005559   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:15:04.005582   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:04.054868   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:04.054901   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:04.118941   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:04.118973   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:04.188272   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:15:04.188314   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:04.230473   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:15:04.230502   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:15:04.247069   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:04.247097   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:04.425844   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:04.425877   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:04.492751   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:04.492789   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:07.374768   61298 system_pods.go:59] 8 kube-system pods found
	I1212 21:15:07.374796   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.374801   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.374806   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.374810   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.374814   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.374818   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.374823   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.374828   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.374835   61298 system_pods.go:74] duration metric: took 3.983674471s to wait for pod list to return data ...
	I1212 21:15:07.374842   61298 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:07.377370   61298 default_sa.go:45] found service account: "default"
	I1212 21:15:07.377391   61298 default_sa.go:55] duration metric: took 2.542814ms for default service account to be created ...
	I1212 21:15:07.377400   61298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:07.384723   61298 system_pods.go:86] 8 kube-system pods found
	I1212 21:15:07.384751   61298 system_pods.go:89] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.384758   61298 system_pods.go:89] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.384767   61298 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.384776   61298 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.384782   61298 system_pods.go:89] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.384789   61298 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.384800   61298 system_pods.go:89] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.384809   61298 system_pods.go:89] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.384824   61298 system_pods.go:126] duration metric: took 7.416446ms to wait for k8s-apps to be running ...
	I1212 21:15:07.384838   61298 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:15:07.384893   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:07.402316   61298 system_svc.go:56] duration metric: took 17.466449ms WaitForService to wait for kubelet.
	I1212 21:15:07.402350   61298 kubeadm.go:581] duration metric: took 4m24.176848962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:15:07.402367   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:15:07.405566   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:15:07.405598   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:15:07.405616   61298 node_conditions.go:105] duration metric: took 3.244651ms to run NodePressure ...
	I1212 21:15:07.405628   61298 start.go:228] waiting for startup goroutines ...
	I1212 21:15:07.405637   61298 start.go:233] waiting for cluster config update ...
	I1212 21:15:07.405649   61298 start.go:242] writing updated cluster config ...
	I1212 21:15:07.405956   61298 ssh_runner.go:195] Run: rm -f paused
	I1212 21:15:07.457339   61298 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:15:07.459492   61298 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-171828" cluster and "default" namespace by default
	I1212 21:15:04.820409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:07.323802   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:08.135943   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:10.633863   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:11.829177   60948 pod_ready.go:81] duration metric: took 4m0.000566874s waiting for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:11.829231   60948 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:11.829268   60948 pod_ready.go:38] duration metric: took 4m1.1991406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:11.829314   60948 kubeadm.go:640] restartCluster took 5m11.909727716s
	W1212 21:15:11.829387   60948 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:11.829425   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:09.824487   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:12.319761   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:14.818898   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:16.822843   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:18.398899   60948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.569443116s)
	I1212 21:15:18.398988   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:18.421423   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:18.437661   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:18.459692   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:18.459747   60948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 21:15:18.529408   60948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1212 21:15:18.529485   60948 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:18.690865   60948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:18.691034   60948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:18.691165   60948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:18.939806   60948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:18.939966   60948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:18.949943   60948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1212 21:15:19.070931   60948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:19.072676   60948 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:19.072783   60948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:19.072868   60948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:19.072976   60948 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:19.073053   60948 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:19.073145   60948 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:19.073253   60948 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:19.073367   60948 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:19.073461   60948 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:19.073562   60948 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:19.073669   60948 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:19.073732   60948 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:19.073833   60948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:19.136565   60948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:19.614416   60948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:19.754535   60948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:20.149412   60948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:20.150707   60948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:20.152444   60948 out.go:204]   - Booting up control plane ...
	I1212 21:15:20.152579   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:20.158445   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:20.162012   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:20.162125   60948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:20.163852   60948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:19.321950   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:21.334725   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:23.820711   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:26.320918   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.174689   60948 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.007313 seconds
	I1212 21:15:29.174814   60948 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:29.189641   60948 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:29.715080   60948 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:29.715312   60948 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-372099 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 21:15:30.225103   60948 kubeadm.go:322] [bootstrap-token] Using token: h843b5.c34afz2u52stqeoc
	I1212 21:15:30.226707   60948 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:30.226873   60948 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:30.237412   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:30.245755   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:30.252764   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:30.259184   60948 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:30.405726   60948 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:30.647756   60948 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:30.647812   60948 kubeadm.go:322] 
	I1212 21:15:30.647908   60948 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:30.647920   60948 kubeadm.go:322] 
	I1212 21:15:30.648030   60948 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:30.648040   60948 kubeadm.go:322] 
	I1212 21:15:30.648076   60948 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:30.648155   60948 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:30.648219   60948 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:30.648229   60948 kubeadm.go:322] 
	I1212 21:15:30.648358   60948 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:30.648477   60948 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:30.648571   60948 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:30.648582   60948 kubeadm.go:322] 
	I1212 21:15:30.648698   60948 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1212 21:15:30.648813   60948 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:30.648824   60948 kubeadm.go:322] 
	I1212 21:15:30.648920   60948 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649052   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:30.649101   60948 kubeadm.go:322]     --control-plane 	  
	I1212 21:15:30.649111   60948 kubeadm.go:322] 
	I1212 21:15:30.649205   60948 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:30.649214   60948 kubeadm.go:322] 
	I1212 21:15:30.649313   60948 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649435   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:30.649933   60948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:30.649961   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:15:30.649971   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:30.651531   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:30.652689   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:30.663574   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:15:30.686618   60948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:30.686690   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.686692   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=old-k8s-version-372099 minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.707974   60948 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:30.909886   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.037212   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.641453   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:28.819896   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.562965   60628 pod_ready.go:81] duration metric: took 4m0.000097626s waiting for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:29.563010   60628 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:29.563041   60628 pod_ready.go:38] duration metric: took 4m10.604144973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:29.563066   60628 kubeadm.go:640] restartCluster took 4m31.813522594s
	W1212 21:15:29.563127   60628 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:29.563156   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:32.141066   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:32.640787   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.140569   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.640785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.140535   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.641063   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.140492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.640819   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.140748   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.640647   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.141492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.641109   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.140524   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.641401   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.141549   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.641304   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.141537   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.641149   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.141391   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.640949   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.000355   60628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.437170953s)
	I1212 21:15:44.000430   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:44.014718   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:44.025263   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:44.035086   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:44.035133   60628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 21:15:44.089390   60628 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 21:15:44.089499   60628 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:44.275319   60628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:44.275496   60628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:44.275594   60628 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:15:44.529521   60628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:42.141256   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:42.640563   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.140785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.640773   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.141155   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.641415   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.140534   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.641492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.141203   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.259301   60948 kubeadm.go:1088] duration metric: took 15.572687129s to wait for elevateKubeSystemPrivileges.
	I1212 21:15:46.259339   60948 kubeadm.go:406] StartCluster complete in 5m46.398198596s
	I1212 21:15:46.259364   60948 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.259455   60948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:15:46.261128   60948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.261410   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:15:46.261582   60948 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:15:46.261654   60948 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261676   60948 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-372099"
	W1212 21:15:46.261691   60948 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:15:46.261690   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:15:46.261729   60948 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261739   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.261745   60948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-372099"
	I1212 21:15:46.262128   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262150   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262176   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262204   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262371   60948 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-372099"
	I1212 21:15:46.262388   60948 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-372099"
	W1212 21:15:46.262396   60948 addons.go:240] addon metrics-server should already be in state true
	I1212 21:15:46.262431   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.262755   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262775   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.280829   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 21:15:46.281025   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1212 21:15:46.281167   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I1212 21:15:46.281451   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.282027   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282043   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282307   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282340   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282381   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282455   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282466   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.282760   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282816   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.283348   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283365   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283377   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.283388   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.286570   60948 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-372099"
	W1212 21:15:46.286591   60948 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:15:46.286618   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.287021   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.287041   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.300740   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1212 21:15:46.301674   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.301993   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I1212 21:15:46.302303   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.302317   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.302667   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.302772   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.302940   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.303112   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.303127   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.303537   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.304537   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.306285   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.308411   60948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:15:46.307398   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1212 21:15:46.307432   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.310694   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:15:46.310717   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:15:46.310737   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.311358   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.312839   60948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:15:44.530987   60628 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:44.531136   60628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:44.531267   60628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:44.531359   60628 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:44.531879   60628 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:44.532386   60628 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:44.533944   60628 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:44.535037   60628 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:44.536175   60628 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:44.537226   60628 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:44.537964   60628 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:44.538451   60628 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:44.538551   60628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:44.841462   60628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:45.059424   60628 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:15:45.613097   60628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:46.221274   60628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:46.372266   60628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:46.373199   60628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:46.376094   60628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:46.311872   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.314010   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.314158   60948 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.314170   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:15:46.314187   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.314387   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.314450   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.314958   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.314985   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.315221   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.315264   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.315563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.315745   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.315925   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.316191   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.322472   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324106   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.324142   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324390   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.324651   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.324861   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.325008   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.339982   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I1212 21:15:46.340365   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.340889   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.340915   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.341242   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.341434   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.343069   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.343366   60948 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.343384   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:15:46.343402   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.346212   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346596   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.346626   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346882   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.347322   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.347482   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.347618   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	W1212 21:15:46.380698   60948 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-372099" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:15:46.380724   60948 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1212 21:15:46.380745   60948 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:15:46.383175   60948 out.go:177] * Verifying Kubernetes components...
	I1212 21:15:46.384789   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:46.518292   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:15:46.518316   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:15:46.519393   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.554663   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.580810   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:15:46.580839   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:15:46.614409   60948 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.614501   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:15:46.628267   60948 node_ready.go:49] node "old-k8s-version-372099" has status "Ready":"True"
	I1212 21:15:46.628302   60948 node_ready.go:38] duration metric: took 13.858882ms waiting for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.628318   60948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:46.651927   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:46.651957   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:15:46.655191   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:46.734455   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:47.462832   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462859   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.462837   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462930   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465016   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465028   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465047   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465057   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465066   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465018   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465027   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465126   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465143   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465155   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465440   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465459   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465460   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465477   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465462   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465509   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.509931   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.509955   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.510242   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.510268   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.510289   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.529296   60948 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
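	The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.39.1 (and adds a log directive before errors). A minimal verification sketch, assuming the old-k8s-version-372099 kubeconfig context used in this run:
	    # show the injected hosts block in the live ConfigMap
	    kubectl --context old-k8s-version-372099 -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'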
	I1212 21:15:47.740624   60948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006125978s)
	I1212 21:15:47.740686   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.740704   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.741066   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741082   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741104   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.741117   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741344   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741370   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741380   60948 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-372099"
	I1212 21:15:47.741382   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.743094   60948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:15:46.377620   60628 out.go:204]   - Booting up control plane ...
	I1212 21:15:46.377753   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:46.380316   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:46.381669   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:46.400406   60628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:46.401911   60628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:46.402016   60628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 21:15:46.577916   60628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:47.744911   60948 addons.go:502] enable addons completed in 1.483323446s: enabled=[storage-provisioner default-storageclass metrics-server]
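	With the addons applied, a quick check that the corresponding workloads landed in kube-system (a sketch; the metrics-server deployment name and k8s-app label follow the addon manifests referenced above and are assumptions, not shown in this log):
	    kubectl --context old-k8s-version-372099 -n kube-system get deploy metrics-server
	    kubectl --context old-k8s-version-372099 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context old-k8s-version-372099 get storageclass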
	I1212 21:15:48.879924   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:51.240011   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.081961   60628 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503798 seconds
	I1212 21:15:55.108753   60628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:55.132442   60628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:55.675426   60628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:55.675616   60628 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-343495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:15:56.197198   60628 kubeadm.go:322] [bootstrap-token] Using token: 6e6rca.dj99vsq9tzjoif3m
	I1212 21:15:56.198596   60628 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:56.198756   60628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:56.204758   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:15:56.217506   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:56.221482   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:56.225791   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:56.231024   60628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:56.249696   60628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:15:56.516070   60628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:56.613203   60628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:56.613227   60628 kubeadm.go:322] 
	I1212 21:15:56.613315   60628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:56.613340   60628 kubeadm.go:322] 
	I1212 21:15:56.613432   60628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:56.613447   60628 kubeadm.go:322] 
	I1212 21:15:56.613501   60628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:56.613588   60628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:56.613671   60628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:56.613682   60628 kubeadm.go:322] 
	I1212 21:15:56.613755   60628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 21:15:56.613762   60628 kubeadm.go:322] 
	I1212 21:15:56.613822   60628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:15:56.613832   60628 kubeadm.go:322] 
	I1212 21:15:56.613903   60628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:56.614004   60628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:56.614104   60628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:56.614116   60628 kubeadm.go:322] 
	I1212 21:15:56.614244   60628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:15:56.614369   60628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:56.614388   60628 kubeadm.go:322] 
	I1212 21:15:56.614507   60628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614653   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:56.614682   60628 kubeadm.go:322] 	--control-plane 
	I1212 21:15:56.614689   60628 kubeadm.go:322] 
	I1212 21:15:56.614787   60628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:56.614797   60628 kubeadm.go:322] 
	I1212 21:15:56.614865   60628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614993   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:56.616155   60628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:56.616184   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:15:56.616197   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:56.618787   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:53.240376   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.738865   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:56.620193   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:56.653642   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
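	The 457-byte conflist copied above is the bridge CNI configuration selected for the kvm2 driver + crio runtime combination; its contents are not included in this report. A sketch for inspecting it on the node, using the no-preload-343495 profile from this run:
	    # view the bridge CNI config that was just written to the guest
	    minikube -p no-preload-343495 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist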
	I1212 21:15:56.701431   60628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:56.701520   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.701521   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=no-preload-343495 minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.765645   60628 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:57.021925   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.162627   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.772366   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.239852   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.239881   60948 pod_ready.go:81] duration metric: took 10.584655594s waiting for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.239895   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245919   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.245943   60948 pod_ready.go:81] duration metric: took 6.039649ms waiting for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245955   60948 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251905   60948 pod_ready.go:92] pod "kube-proxy-vzqkz" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.251933   60948 pod_ready.go:81] duration metric: took 5.969732ms waiting for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251943   60948 pod_ready.go:38] duration metric: took 10.623613273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:57.251963   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:15:57.252021   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:15:57.271808   60948 api_server.go:72] duration metric: took 10.891018678s to wait for apiserver process to appear ...
	I1212 21:15:57.271834   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:15:57.271853   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:15:57.279544   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:15:57.280373   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:15:57.280393   60948 api_server.go:131] duration metric: took 8.55283ms to wait for apiserver health ...
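	The healthz probe above can be reproduced by hand against the same endpoint (sketch only; -k skips TLS verification since the cluster CA bundle is not loaded outside the test harness):
	    curl -k https://192.168.39.202:8443/healthz
	    # expected body on success: ok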
	I1212 21:15:57.280401   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:57.284489   60948 system_pods.go:59] 5 kube-system pods found
	I1212 21:15:57.284516   60948 system_pods.go:61] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.284520   60948 system_pods.go:61] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.284525   60948 system_pods.go:61] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.284531   60948 system_pods.go:61] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.284535   60948 system_pods.go:61] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.284542   60948 system_pods.go:74] duration metric: took 4.136571ms to wait for pod list to return data ...
	I1212 21:15:57.284549   60948 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:57.288616   60948 default_sa.go:45] found service account: "default"
	I1212 21:15:57.288643   60948 default_sa.go:55] duration metric: took 4.087698ms for default service account to be created ...
	I1212 21:15:57.288653   60948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:57.292785   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.292807   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.292812   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.292816   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.292822   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.292827   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.292842   60948 retry.go:31] will retry after 207.544988ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
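	The retry loop above is waiting for the control-plane static pods to register; they can be listed directly by the component labels named in the waiter (sketch, same context as above):
	    kubectl --context old-k8s-version-372099 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'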
	I1212 21:15:57.505885   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.505911   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.505917   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.505921   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.505928   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.505932   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.505949   60948 retry.go:31] will retry after 367.076908ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.878466   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.878501   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.878509   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.878514   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.878520   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.878527   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.878547   60948 retry.go:31] will retry after 381.308829ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.264211   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.264237   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.264243   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.264247   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.264256   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.264262   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.264290   60948 retry.go:31] will retry after 366.461937ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.638206   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.638229   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.638234   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.638238   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.638245   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.638249   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.638276   60948 retry.go:31] will retry after 512.413163ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.156233   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.156263   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.156268   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.156272   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.156279   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.156284   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.156301   60948 retry.go:31] will retry after 775.973999ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.937928   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.937958   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.937966   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.937973   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.937983   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.937990   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.938009   60948 retry.go:31] will retry after 831.74396ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:00.775403   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:00.775427   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:00.775432   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:00.775436   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:00.775442   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:00.775447   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:00.775461   60948 retry.go:31] will retry after 1.069326929s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:01.849879   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:01.849906   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:01.849911   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:01.849915   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:01.849922   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:01.849927   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:01.849944   60948 retry.go:31] will retry after 1.540430535s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.271568   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:58.772443   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.271781   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.771732   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.272235   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.771891   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.271870   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.772445   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.271997   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.772496   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.395395   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:03.395421   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:03.395427   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:03.395431   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:03.395437   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:03.395442   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:03.395458   60948 retry.go:31] will retry after 2.25868002s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:05.661953   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:05.661988   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:05.661997   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:05.662005   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:05.662016   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:05.662026   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:05.662047   60948 retry.go:31] will retry after 2.893719866s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:03.272067   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.771992   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.272187   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.772518   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.272480   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.772460   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.272463   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.772291   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.271662   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.772063   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.272491   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.414409   60628 kubeadm.go:1088] duration metric: took 11.712956328s to wait for elevateKubeSystemPrivileges.
	I1212 21:16:08.414452   60628 kubeadm.go:406] StartCluster complete in 5m10.714058162s
	I1212 21:16:08.414480   60628 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.414582   60628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:16:08.417772   60628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.418132   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:16:08.418167   60628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:16:08.418267   60628 addons.go:69] Setting storage-provisioner=true in profile "no-preload-343495"
	I1212 21:16:08.418281   60628 addons.go:69] Setting default-storageclass=true in profile "no-preload-343495"
	I1212 21:16:08.418289   60628 addons.go:231] Setting addon storage-provisioner=true in "no-preload-343495"
	W1212 21:16:08.418297   60628 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:16:08.418301   60628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-343495"
	I1212 21:16:08.418310   60628 addons.go:69] Setting metrics-server=true in profile "no-preload-343495"
	I1212 21:16:08.418344   60628 addons.go:231] Setting addon metrics-server=true in "no-preload-343495"
	I1212 21:16:08.418349   60628 host.go:66] Checking if "no-preload-343495" exists ...
	W1212 21:16:08.418353   60628 addons.go:240] addon metrics-server should already be in state true
	I1212 21:16:08.418367   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:16:08.418401   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418810   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418850   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.437816   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I1212 21:16:08.438320   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.438921   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.438945   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.439225   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I1212 21:16:08.439418   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.439740   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.439809   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1212 21:16:08.440064   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.440092   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.440471   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.440491   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.440499   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.440887   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.440978   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.441002   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.441399   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.441442   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.441724   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.441960   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.446221   60628 addons.go:231] Setting addon default-storageclass=true in "no-preload-343495"
	W1212 21:16:08.446247   60628 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:16:08.446276   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.446655   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.446690   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.456479   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1212 21:16:08.456883   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.457330   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.457343   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.457784   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.457958   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.459741   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.461624   60628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:16:08.462951   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:16:08.462963   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:16:08.462978   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.462595   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1212 21:16:08.463831   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.464424   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.464443   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.465295   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.465627   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.467919   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468652   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.468681   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468905   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.469083   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.469197   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.469296   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.472614   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.474536   60628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:16:08.475957   60628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.475976   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:16:08.475995   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.476821   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I1212 21:16:08.477241   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.477772   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.477796   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.478322   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.479408   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.479457   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.479725   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480262   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.480285   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480565   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.480760   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.480909   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.481087   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.496182   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1212 21:16:08.496703   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.497250   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.497275   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.497705   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.497959   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.499696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.500049   60628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.500071   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:16:08.500098   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.503216   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503689   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.503717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503979   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.504187   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.504348   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.504521   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.519292   60628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-343495" context rescaled to 1 replicas
	I1212 21:16:08.519324   60628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:16:08.521243   60628 out.go:177] * Verifying Kubernetes components...
	I1212 21:16:08.522602   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:08.637693   60628 node_ready.go:35] waiting up to 6m0s for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.638072   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:16:08.640594   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:16:08.640620   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:16:08.645008   60628 node_ready.go:49] node "no-preload-343495" has status "Ready":"True"
	I1212 21:16:08.645041   60628 node_ready.go:38] duration metric: took 7.313798ms waiting for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.645056   60628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.650650   60628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658528   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.658556   60628 pod_ready.go:81] duration metric: took 7.881265ms waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658569   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682938   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.682962   60628 pod_ready.go:81] duration metric: took 24.384424ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682975   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.683220   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.688105   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:16:08.688131   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:16:08.695007   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.695034   60628 pod_ready.go:81] duration metric: took 12.050101ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.695046   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701206   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.701230   60628 pod_ready.go:81] duration metric: took 6.174333ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701240   60628 pod_ready.go:38] duration metric: took 56.165354ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.701262   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:16:08.701321   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:16:08.744650   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.758415   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:08.758444   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:16:08.841030   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:09.387385   60628 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1212 21:16:10.224475   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541186317s)
	I1212 21:16:10.224515   60628 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.523170366s)
	I1212 21:16:10.224548   60628 api_server.go:72] duration metric: took 1.705201863s to wait for apiserver process to appear ...
	I1212 21:16:10.224561   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:16:10.224571   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.479890747s)
	I1212 21:16:10.224606   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224579   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:16:10.224621   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.224522   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224686   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225001   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225050   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225065   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225074   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225011   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225019   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225020   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225115   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225130   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225140   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225347   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225358   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225507   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225572   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225600   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.233359   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:16:10.237567   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:16:10.237593   60628 api_server.go:131] duration metric: took 13.024501ms to wait for apiserver health ...
	I1212 21:16:10.237602   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:16:10.268851   60628 system_pods.go:59] 9 kube-system pods found
	I1212 21:16:10.268891   60628 system_pods.go:61] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268903   60628 system_pods.go:61] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268912   60628 system_pods.go:61] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.268920   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.268927   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.268936   60628 system_pods.go:61] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.268943   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.268953   60628 system_pods.go:61] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.268963   60628 system_pods.go:61] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending
	I1212 21:16:10.268971   60628 system_pods.go:74] duration metric: took 31.361836ms to wait for pod list to return data ...
	I1212 21:16:10.268987   60628 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:16:10.270947   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.270971   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.271270   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.271290   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.280134   60628 default_sa.go:45] found service account: "default"
	I1212 21:16:10.280159   60628 default_sa.go:55] duration metric: took 11.163534ms for default service account to be created ...
	I1212 21:16:10.280169   60628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:16:10.314822   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.314864   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314873   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314879   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.314886   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.314893   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.314903   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.314912   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.314923   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.314937   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.314957   60628 retry.go:31] will retry after 284.074155ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.328798   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.487713481s)
	I1212 21:16:10.328851   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.328866   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329251   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329276   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329276   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.329291   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.329304   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329540   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329556   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329566   60628 addons.go:467] Verifying addon metrics-server=true in "no-preload-343495"
	I1212 21:16:10.332474   60628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:16:08.563361   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:08.563393   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:08.563401   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:08.563408   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:08.563420   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:08.563427   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:08.563449   60948 retry.go:31] will retry after 2.871673075s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:11.441932   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:11.441970   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:11.441977   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:11.441983   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:11.441993   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.442003   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:11.442022   60948 retry.go:31] will retry after 3.977150615s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:10.333924   60628 addons.go:502] enable addons completed in 1.915760025s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:16:10.616684   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.616724   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616739   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616748   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.616757   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.616764   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.616775   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.616785   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.616795   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.616807   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.616825   60628 retry.go:31] will retry after 291.662068ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.919064   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.919104   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919114   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919125   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.919135   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.919142   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.919152   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.919160   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.919211   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.919229   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.919259   60628 retry.go:31] will retry after 381.992278ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.312083   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.312115   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.312121   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.312128   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.312137   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.312146   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.312152   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.312162   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.312170   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.312189   60628 retry.go:31] will retry after 495.705235ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.820167   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.820200   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.820205   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.820212   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.820217   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.820222   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.820226   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.820232   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.820237   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.820254   60628 retry.go:31] will retry after 635.810888ms: missing components: kube-dns, kube-proxy
	I1212 21:16:12.464096   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:12.464139   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Running
	I1212 21:16:12.464145   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:12.464149   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:12.464154   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:12.464158   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Running
	I1212 21:16:12.464162   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:12.464168   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:12.464176   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Running
	I1212 21:16:12.464185   60628 system_pods.go:126] duration metric: took 2.184010512s to wait for k8s-apps to be running ...
	I1212 21:16:12.464192   60628 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:12.464272   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:12.480090   60628 system_svc.go:56] duration metric: took 15.887114ms WaitForService to wait for kubelet.
	I1212 21:16:12.480124   60628 kubeadm.go:581] duration metric: took 3.960778694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:12.480163   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:12.483564   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:12.483589   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:12.483601   60628 node_conditions.go:105] duration metric: took 3.433071ms to run NodePressure ...
	I1212 21:16:12.483612   60628 start.go:228] waiting for startup goroutines ...
	I1212 21:16:12.483617   60628 start.go:233] waiting for cluster config update ...
	I1212 21:16:12.483626   60628 start.go:242] writing updated cluster config ...
	I1212 21:16:12.483887   60628 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:12.534680   60628 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 21:16:12.536622   60628 out.go:177] * Done! kubectl is now configured to use "no-preload-343495" cluster and "default" namespace by default
	I1212 21:16:15.424662   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:15.424691   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:15.424697   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:15.424701   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:15.424707   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:15.424712   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:15.424728   60948 retry.go:31] will retry after 4.920488737s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:20.351078   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:20.351107   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:20.351112   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:20.351116   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:20.351122   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:20.351127   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:20.351143   60948 retry.go:31] will retry after 5.718245097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:26.077073   60948 system_pods.go:86] 6 kube-system pods found
	I1212 21:16:26.077097   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:26.077103   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:26.077107   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Pending
	I1212 21:16:26.077111   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:26.077117   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:26.077122   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:26.077139   60948 retry.go:31] will retry after 8.251519223s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:34.334757   60948 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:34.334782   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:34.334787   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:34.334791   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:34.334796   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Pending
	I1212 21:16:34.334799   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:34.334804   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:34.334811   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:34.334815   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:34.334830   60948 retry.go:31] will retry after 8.584990669s: missing components: kube-apiserver, kube-scheduler
	I1212 21:16:42.927591   60948 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:42.927618   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:42.927624   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:42.927628   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:42.927632   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Running
	I1212 21:16:42.927637   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:42.927642   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:42.927647   60948 system_pods.go:89] "kube-scheduler-old-k8s-version-372099" [0e3e4e58-289f-47f1-999b-8fd87b90558a] Running
	I1212 21:16:42.927653   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:42.927658   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:42.927667   60948 system_pods.go:126] duration metric: took 45.639007967s to wait for k8s-apps to be running ...
	I1212 21:16:42.927673   60948 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:42.927715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:42.948680   60948 system_svc.go:56] duration metric: took 20.9943ms WaitForService to wait for kubelet.
	I1212 21:16:42.948711   60948 kubeadm.go:581] duration metric: took 56.56793182s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:42.948735   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:42.952462   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:42.952493   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:42.952505   60948 node_conditions.go:105] duration metric: took 3.763543ms to run NodePressure ...
	I1212 21:16:42.952518   60948 start.go:228] waiting for startup goroutines ...
	I1212 21:16:42.952527   60948 start.go:233] waiting for cluster config update ...
	I1212 21:16:42.952541   60948 start.go:242] writing updated cluster config ...
	I1212 21:16:42.952847   60948 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:43.001964   60948 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 21:16:43.003962   60948 out.go:177] 
	W1212 21:16:43.005327   60948 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 21:16:43.006827   60948 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 21:16:43.008259   60948 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-372099" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:09:17 UTC, ends at Tue 2023-12-12 21:23:19 UTC. --
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.051591559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=17f3781e-0635-4788-941a-2bfde06e7c72 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.053308300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6e8641b0-6e05-44a3-b441-79b94f5d89b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.054046911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416199054027072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6e8641b0-6e05-44a3-b441-79b94f5d89b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.054881048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5fe39bf6-adfb-4d8f-9e44-8c1631d403e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.054937030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5fe39bf6-adfb-4d8f-9e44-8c1631d403e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.055154558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5fe39bf6-adfb-4d8f-9e44-8c1631d403e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.098054827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fb1e0415-d05f-4cf3-b6a8-83991b34fcc3 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.098154867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fb1e0415-d05f-4cf3-b6a8-83991b34fcc3 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.099472433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=61190713-4b88-4cfc-953a-b81a848a2706 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.099979646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416199099963631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=61190713-4b88-4cfc-953a-b81a848a2706 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.100653070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=22ae2240-ee6b-4972-8032-66c9ef095ea5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.100816419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=22ae2240-ee6b-4972-8032-66c9ef095ea5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.101010745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=22ae2240-ee6b-4972-8032-66c9ef095ea5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.140057812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d2474ffc-4eeb-4487-8457-df8a0fbbed24 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.140145321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d2474ffc-4eeb-4487-8457-df8a0fbbed24 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.141624062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9b3c4110-1980-471a-8929-f1d81d42d475 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.142217328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416199142199803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9b3c4110-1980-471a-8929-f1d81d42d475 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.143266807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=58f2a9ac-1b6e-4081-8a72-dd6b070388b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.143444937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=58f2a9ac-1b6e-4081-8a72-dd6b070388b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.143792520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=58f2a9ac-1b6e-4081-8a72-dd6b070388b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.144366044Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=ece3824c-62d5-42b3-88a5-d1153ef52916 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.144634235Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c3f151c8-69ac-4783-b525-035f3955a799,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415399944238065,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T21:09:51.972971179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zj5wn,Uid:8f51596e-d7e1-40de-9394-5788ff7fde7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415399640683
912,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T21:09:51.972975785Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05ba6512412b4f50875e320705c7ce71bfc731ff1a4b0f9ce6b5f56b092bf342,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-v978l,Uid:5870eb0c-b40b-4fc5-bf09-de1ed799993c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415396046150638,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-v978l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5870eb0c-b40b-4fc5-bf09-de1ed799993c,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T21:09:51.
972983979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a48c6632-0d79-4b43-ad2b-78c090c9b1f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415392342991814,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-12T21:09:51.972968801Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&PodSandboxMetadata{Name:kube-proxy-nsv4w,Uid:621a8605-777d-4fab-8884-16de1091e792,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415392315076312,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-777d-4fab-8884-16de1091e792,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2023-12-12T21:09:51.972979177Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-831188,Uid:d237398c7af5429d966c72c07b5538ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385511379089,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d966c72c07b5538ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d237398c7af5429d966c72c07b5538ba,kubernetes.io/config.seen: 2023-12-12T21:09:44.968239057Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-831188,Uid:1b5bc1d0aeeed3fa69e39920f199d3e
4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385499837429,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1b5bc1d0aeeed3fa69e39920f199d3e4,kubernetes.io/config.seen: 2023-12-12T21:09:44.968238196Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-831188,Uid:ae7f31f59995b6074da63b24822c15b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385490848276,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f599
95b6074da63b24822c15b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.163:2379,kubernetes.io/config.hash: ae7f31f59995b6074da63b24822c15b8,kubernetes.io/config.seen: 2023-12-12T21:09:44.968232926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-831188,Uid:4bc6a9c01130e3674685653344c69aea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385486624631,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.163:8443,kubernetes.io/config.hash: 4bc6a9c01130e3674685653344
c69aea,kubernetes.io/config.seen: 2023-12-12T21:09:44.968236871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=ece3824c-62d5-42b3-88a5-d1153ef52916 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.145521313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c90c1d62-83e2-4e0b-822d-6998d8e846c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.145578250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c90c1d62-83e2-4e0b-822d-6998d8e846c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:23:19 embed-certs-831188 crio[716]: time="2023-12-12 21:23:19.145904465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c90c1d62-83e2-4e0b-822d-6998d8e846c8 name=/runtime.v1.RuntimeService/ListContainers
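The Request/Response pairs above are CRI gRPC calls (Version, ImageFsInfo, ListContainers) answered by CRI-O on the socket the node advertises further down (unix:///var/run/crio/crio.sock). A minimal sketch, assuming the standard cri-api/grpc module paths and that it runs on the node itself, of issuing the same ListContainers call that the log shows:

// Hypothetical sketch: issue the /runtime.v1.RuntimeService/ListContainers call
// seen in the CRI-O debug log above. Module paths and the socket location are
// assumptions taken from the node annotations in this report.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O runtime service over its unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Sending no filter returns the full container list, matching the
	// "No filters were applied, returning full container list" log line.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s attempt=%d state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}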
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1703f1d5be8cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   77dd00140750b       storage-provisioner
	d0a52d85abb3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f57cc23b61498       busybox
	41483ce2844cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   b79746546c948       coredns-5dd5756b68-zj5wn
	bc1393c2dcb25       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   45b833dcc94fd       kube-proxy-nsv4w
	0285b9b54f023       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   77dd00140750b       storage-provisioner
	6a76cf81a377e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   a8e06ca0d1aea       kube-scheduler-embed-certs-831188
	aa3b65804db3f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   f76c5991fd388       etcd-embed-certs-831188
	a8ada7ed54f93       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   b168b4263329f       kube-controller-manager-embed-certs-831188
	c8c7037baeaee       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   1cabfc321a2f0       kube-apiserver-embed-certs-831188
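The table above is a crictl-style container listing taken from the guest. A minimal sketch of collecting it for this profile, assuming the binary path used elsewhere in this report (out/minikube-linux-amd64) and that crictl is available inside the VM:

// Hypothetical sketch: reproduce a container-status listing like the one above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// `minikube ssh` runs the given command inside the guest; `crictl ps -a`
	// lists running and exited containers with their attempt counts.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-831188",
		"ssh", "sudo crictl ps -a").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl ps failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}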
	
	
	==> coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54396 - 61564 "HINFO IN 667314211497334327.1269787668080689230. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008873466s
	
	
	==> describe nodes <==
	Name:               embed-certs-831188
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-831188
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=embed-certs-831188
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_01_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:01:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-831188
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 21:23:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:20:36 +0000   Tue, 12 Dec 2023 21:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:20:36 +0000   Tue, 12 Dec 2023 21:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:20:36 +0000   Tue, 12 Dec 2023 21:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:20:36 +0000   Tue, 12 Dec 2023 21:10:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.163
	  Hostname:    embed-certs-831188
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0060569b9eb9492eba6d6021718c1259
	  System UUID:                0060569b-9eb9-492e-ba6d-6021718c1259
	  Boot ID:                    33626dbd-5e61-42d3-9329-56af64902a4b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-zj5wn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-831188                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-831188             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-831188    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-nsv4w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-831188             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-v978l               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-831188 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-831188 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-831188 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-831188 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node embed-certs-831188 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-831188 event: Registered Node embed-certs-831188 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-831188 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-831188 event: Registered Node embed-certs-831188 in Controller
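The node description and event list above correspond to a kubectl describe of embed-certs-831188. A minimal client-go sketch that reads the same node conditions, assuming the kubeconfig context for this run is named after the profile:

// Hypothetical sketch: fetch the Ready/MemoryPressure/DiskPressure/PIDPressure
// conditions shown under "Conditions" above via the Kubernetes API.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig and pin the context to the minikube profile.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "embed-certs-831188"},
	).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.Background(), "embed-certs-831188", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		// Mirrors the condition rows printed by `kubectl describe node`.
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}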
	
	
	==> dmesg <==
	[Dec12 21:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069842] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.417665] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.559118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152673] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.451102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.183510] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.111389] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.152932] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.114050] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.238035] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.208630] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[ +15.004631] kauditd_printk_skb: 19 callbacks suppressed
	[Dec12 21:10] hrtimer: interrupt took 5107291 ns
	
	
	==> etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] <==
	{"level":"info","ts":"2023-12-12T21:09:58.435518Z","caller":"traceutil/trace.go:171","msg":"trace[177265340] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"122.510856ms","start":"2023-12-12T21:09:58.312988Z","end":"2023-12-12T21:09:58.435499Z","steps":["trace[177265340] 'process raft request'  (duration: 122.434525ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:09:58.773311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.785539ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314121624500545274 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c8f50eef04\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c8f50eef04\" value_size:832 lease:3090749587645769245 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T21:09:58.77342Z","caller":"traceutil/trace.go:171","msg":"trace[838246603] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"333.531409ms","start":"2023-12-12T21:09:58.439878Z","end":"2023-12-12T21:09:58.773409Z","steps":["trace[838246603] 'process raft request'  (duration: 115.338009ms)","trace[838246603] 'compare'  (duration: 217.663411ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T21:09:58.773463Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:09:58.439862Z","time spent":"333.583511ms","remote":"127.0.0.1:48184","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":927,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c8f50eef04\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c8f50eef04\" value_size:832 lease:3090749587645769245 >> failure:<>"}
	{"level":"warn","ts":"2023-12-12T21:10:22.671154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.168556ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12314121624500545502 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c925a4c82a\" mod_revision:551 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c925a4c82a\" value_size:674 lease:3090749587645769245 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c925a4c82a\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T21:10:22.671396Z","caller":"traceutil/trace.go:171","msg":"trace[1029541781] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"640.903862ms","start":"2023-12-12T21:10:22.030469Z","end":"2023-12-12T21:10:22.671373Z","steps":["trace[1029541781] 'process raft request'  (duration: 369.174252ms)","trace[1029541781] 'compare'  (duration: 270.647508ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T21:10:22.671559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.030457Z","time spent":"641.064194ms","remote":"127.0.0.1:48184","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":769,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c925a4c82a\" mod_revision:551 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c925a4c82a\" value_size:674 lease:3090749587645769245 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-v978l.17a031c925a4c82a\" > >"}
	{"level":"warn","ts":"2023-12-12T21:10:22.921652Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":12314121624500545501,"retry-timeout":"500ms"}
	{"level":"info","ts":"2023-12-12T21:10:23.026275Z","caller":"traceutil/trace.go:171","msg":"trace[1085323657] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"641.654385ms","start":"2023-12-12T21:10:22.384607Z","end":"2023-12-12T21:10:23.026262Z","steps":["trace[1085323657] 'process raft request'  (duration: 641.620162ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.026468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.384592Z","time spent":"641.801894ms","remote":"127.0.0.1:48206","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5738,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/embed-certs-831188\" mod_revision:562 > success:<request_put:<key:\"/registry/minions/embed-certs-831188\" value_size:5694 >> failure:<request_range:<key:\"/registry/minions/embed-certs-831188\" > >"}
	{"level":"info","ts":"2023-12-12T21:10:23.026892Z","caller":"traceutil/trace.go:171","msg":"trace[185495373] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"857.38573ms","start":"2023-12-12T21:10:22.169491Z","end":"2023-12-12T21:10:23.026876Z","steps":["trace[185495373] 'process raft request'  (duration: 856.70121ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.027026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.169474Z","time spent":"857.50333ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-831188\" mod_revision:584 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-831188\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-831188\" > >"}
	{"level":"info","ts":"2023-12-12T21:10:23.027114Z","caller":"traceutil/trace.go:171","msg":"trace[45005210] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"996.030041ms","start":"2023-12-12T21:10:22.031069Z","end":"2023-12-12T21:10:23.027099Z","steps":["trace[45005210] 'process raft request'  (duration: 935.884906ms)","trace[45005210] 'compare'  (duration: 59.111224ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T21:10:23.027229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.031058Z","time spent":"996.132207ms","remote":"127.0.0.1:48208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4056,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" mod_revision:581 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" value_size:3990 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" > >"}
	{"level":"info","ts":"2023-12-12T21:10:23.088758Z","caller":"traceutil/trace.go:171","msg":"trace[469649662] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:624; }","duration":"668.242757ms","start":"2023-12-12T21:10:22.420425Z","end":"2023-12-12T21:10:23.088668Z","steps":["trace[469649662] 'read index received'  (duration: 546.402678ms)","trace[469649662] 'applied index is now lower than readState.Index'  (duration: 121.838921ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T21:10:23.088942Z","caller":"traceutil/trace.go:171","msg":"trace[2010594245] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"675.813493ms","start":"2023-12-12T21:10:22.413117Z","end":"2023-12-12T21:10:23.08893Z","steps":["trace[2010594245] 'process raft request'  (duration: 675.415887ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.089062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.014524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.163\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2023-12-12T21:10:23.089134Z","caller":"traceutil/trace.go:171","msg":"trace[1617143922] range","detail":"{range_begin:/registry/masterleases/192.168.50.163; range_end:; response_count:1; response_revision:592; }","duration":"177.101026ms","start":"2023-12-12T21:10:22.912022Z","end":"2023-12-12T21:10:23.089123Z","steps":["trace[1617143922] 'agreement among raft nodes before linearized reading'  (duration: 176.973964ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.08924Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.413096Z","time spent":"675.958407ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-a4vo6d4pdmy2ttomkw477gqi2i\" mod_revision:585 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-a4vo6d4pdmy2ttomkw477gqi2i\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-a4vo6d4pdmy2ttomkw477gqi2i\" > >"}
	{"level":"warn","ts":"2023-12-12T21:10:23.089314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"668.924494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" ","response":"range_response_count:1 size:4071"}
	{"level":"info","ts":"2023-12-12T21:10:23.09024Z","caller":"traceutil/trace.go:171","msg":"trace[817726128] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l; range_end:; response_count:1; response_revision:592; }","duration":"669.848602ms","start":"2023-12-12T21:10:22.420381Z","end":"2023-12-12T21:10:23.090229Z","steps":["trace[817726128] 'agreement among raft nodes before linearized reading'  (duration: 668.903836ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.090299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.420364Z","time spent":"669.918803ms","remote":"127.0.0.1:48208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4094,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" "}
	{"level":"info","ts":"2023-12-12T21:19:50.024472Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":823}
	{"level":"info","ts":"2023-12-12T21:19:50.027372Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":823,"took":"2.442291ms","hash":1401188404}
	{"level":"info","ts":"2023-12-12T21:19:50.027542Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1401188404,"revision":823,"compact-revision":-1}
	
	
	==> kernel <==
	 21:23:19 up 14 min,  0 users,  load average: 0.19, 0.21, 0.21
	Linux embed-certs-831188 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] <==
	I1212 21:19:51.664397       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:19:52.664614       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:19:52.664758       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:19:52.664786       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:19:52.664837       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:19:52.664887       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:19:52.666674       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:20:51.480868       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:20:52.665835       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:20:52.665903       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:20:52.665912       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:20:52.667135       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:20:52.667222       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:20:52.667230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:21:51.480521       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 21:22:51.480331       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:22:52.666412       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:22:52.666519       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:22:52.666592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:22:52.667673       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:22:52.667841       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:22:52.667876       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] <==
	I1212 21:17:34.762602       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:18:04.279681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:18:04.774036       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:18:34.285401       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:18:34.787035       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:19:04.291406       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:19:04.796291       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:19:34.297458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:19:34.806900       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:20:04.304011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:20:04.817172       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:20:34.310170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:20:34.828950       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:20:58.027552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="239.33µs"
	E1212 21:21:04.317015       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:21:04.839312       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:21:12.026252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="125.315µs"
	E1212 21:21:34.326635       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:21:34.848130       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:22:04.332597       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:22:04.856675       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:22:34.339617       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:22:34.865599       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:23:04.344672       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:23:04.875611       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] <==
	I1212 21:09:54.296343       1 server_others.go:69] "Using iptables proxy"
	I1212 21:09:54.313405       1 node.go:141] Successfully retrieved node IP: 192.168.50.163
	I1212 21:09:54.377951       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 21:09:54.378009       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 21:09:54.383758       1 server_others.go:152] "Using iptables Proxier"
	I1212 21:09:54.383918       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 21:09:54.384298       1 server.go:846] "Version info" version="v1.28.4"
	I1212 21:09:54.384355       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:09:54.385295       1 config.go:188] "Starting service config controller"
	I1212 21:09:54.385361       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 21:09:54.385418       1 config.go:97] "Starting endpoint slice config controller"
	I1212 21:09:54.385443       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 21:09:54.387343       1 config.go:315] "Starting node config controller"
	I1212 21:09:54.387612       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 21:09:54.485920       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 21:09:54.485959       1 shared_informer.go:318] Caches are synced for service config
	I1212 21:09:54.488110       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] <==
	W1212 21:09:51.642460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:09:51.642508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 21:09:51.642580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 21:09:51.642592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 21:09:51.642643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:09:51.642652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 21:09:51.642766       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:09:51.642780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 21:09:51.647035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 21:09:51.647093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 21:09:51.647166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:09:51.647175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 21:09:51.647226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 21:09:51.647235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 21:09:51.647277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 21:09:51.647285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 21:09:51.647331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:09:51.647340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 21:09:51.650087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 21:09:51.650148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 21:09:51.660149       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:09:51.660210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 21:09:51.660293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 21:09:51.660304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1212 21:09:53.222107       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:09:17 UTC, ends at Tue 2023-12-12 21:23:19 UTC. --
	Dec 12 21:20:45 embed-certs-831188 kubelet[924]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:20:46 embed-certs-831188 kubelet[924]: E1212 21:20:46.022612     924 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 12 21:20:46 embed-certs-831188 kubelet[924]: E1212 21:20:46.022787     924 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 12 21:20:46 embed-certs-831188 kubelet[924]: E1212 21:20:46.023022     924 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rqs7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-v978l_kube-system(5870eb0c-b40b-4fc5-bf09-de1ed799993c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:20:46 embed-certs-831188 kubelet[924]: E1212 21:20:46.023084     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:20:58 embed-certs-831188 kubelet[924]: E1212 21:20:58.009104     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:21:12 embed-certs-831188 kubelet[924]: E1212 21:21:12.009434     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:21:26 embed-certs-831188 kubelet[924]: E1212 21:21:26.010078     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:21:37 embed-certs-831188 kubelet[924]: E1212 21:21:37.010218     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:21:45 embed-certs-831188 kubelet[924]: E1212 21:21:45.028194     924 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:21:45 embed-certs-831188 kubelet[924]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:21:45 embed-certs-831188 kubelet[924]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:21:45 embed-certs-831188 kubelet[924]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:21:48 embed-certs-831188 kubelet[924]: E1212 21:21:48.009274     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:21:59 embed-certs-831188 kubelet[924]: E1212 21:21:59.010595     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:22:12 embed-certs-831188 kubelet[924]: E1212 21:22:12.009822     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:22:24 embed-certs-831188 kubelet[924]: E1212 21:22:24.009157     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:22:38 embed-certs-831188 kubelet[924]: E1212 21:22:38.009359     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:22:45 embed-certs-831188 kubelet[924]: E1212 21:22:45.033617     924 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:22:45 embed-certs-831188 kubelet[924]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:22:45 embed-certs-831188 kubelet[924]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:22:45 embed-certs-831188 kubelet[924]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:22:51 embed-certs-831188 kubelet[924]: E1212 21:22:51.010264     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:23:04 embed-certs-831188 kubelet[924]: E1212 21:23:04.009545     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:23:17 embed-certs-831188 kubelet[924]: E1212 21:23:17.012115     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	
	
	==> storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] <==
	I1212 21:09:54.207217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 21:10:24.213631       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] <==
	I1212 21:10:24.488158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:10:24.501805       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:10:24.502236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:10:41.915442       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:10:41.916131       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-831188_7283bd0a-dad0-48c5-92a8-289512fb0d28!
	I1212 21:10:41.917683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9972a0d0-bc39-4530-9b64-42ff37a1ad1e", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-831188_7283bd0a-dad0-48c5-92a8-289512fb0d28 became leader
	I1212 21:10:42.016527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-831188_7283bd0a-dad0-48c5-92a8-289512fb0d28!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831188 -n embed-certs-831188
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-831188 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-v978l
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-831188 describe pod metrics-server-57f55c9bc5-v978l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-831188 describe pod metrics-server-57f55c9bc5-v978l: exit status 1 (83.392356ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-v978l" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-831188 describe pod metrics-server-57f55c9bc5-v978l: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:15:20.139106   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:16:02.521652   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:16:06.483133   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:24:08.065064848 +0000 UTC m=+5249.285237447
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-171828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-171828 logs -n 25: (1.698675644s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-690675 sudo cat                              | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo find                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo crio                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-690675                                       | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
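
For readability, the most recent start in the table above, which wraps each flag onto its own row, can be assembled into a single invocation ("minikube" here stands in for whatever binary path the run actually used):

    minikube start -p default-k8s-diff-port-171828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.4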
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:06:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:06:02.112042   61298 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:06:02.112158   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112166   61298 out.go:309] Setting ErrFile to fd 2...
	I1212 21:06:02.112171   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112352   61298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:06:02.112888   61298 out.go:303] Setting JSON to false
	I1212 21:06:02.113799   61298 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6516,"bootTime":1702408646,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:06:02.113858   61298 start.go:138] virtualization: kvm guest
	I1212 21:06:02.116152   61298 out.go:177] * [default-k8s-diff-port-171828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:06:02.118325   61298 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:06:02.118373   61298 notify.go:220] Checking for updates...
	I1212 21:06:02.120036   61298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:06:02.121697   61298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:06:02.123350   61298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:06:02.124958   61298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:06:02.126355   61298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:06:02.128221   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:06:02.128652   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.128709   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.143368   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I1212 21:06:02.143740   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.144319   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.144342   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.144674   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.144877   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.145143   61298 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:06:02.145473   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.145519   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.160165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1212 21:06:02.160611   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.161098   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.161129   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.161410   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.161605   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.198703   61298 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:06:02.199992   61298 start.go:298] selected driver: kvm2
	I1212 21:06:02.200011   61298 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.200131   61298 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:06:02.200848   61298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.200920   61298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:06:02.215947   61298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:06:02.216333   61298 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:02.216397   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:06:02.216410   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:06:02.216420   61298 start_flags.go:323] config:
	{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-17182
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.216597   61298 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.218773   61298 out.go:177] * Starting control plane node default-k8s-diff-port-171828 in cluster default-k8s-diff-port-171828
	I1212 21:05:59.427580   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:02.220182   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:06:02.220241   61298 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 21:06:02.220256   61298 cache.go:56] Caching tarball of preloaded images
	I1212 21:06:02.220379   61298 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:06:02.220393   61298 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 21:06:02.220514   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:06:02.220739   61298 start.go:365] acquiring machines lock for default-k8s-diff-port-171828: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:06:05.507538   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:08.579605   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:14.659535   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:17.731542   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:23.811575   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:26.883541   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:32.963600   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:36.035521   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:42.115475   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:45.187562   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:51.267528   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:54.339532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:00.419548   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:03.491553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:09.571514   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:12.643531   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:18.723534   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:21.795549   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:27.875554   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:30.947574   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:37.027523   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:40.099490   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:46.179518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:49.251577   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:55.331532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:58.403520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:04.483547   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:07.555546   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:13.635553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:16.707518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:22.787551   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:25.859539   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:31.939511   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:35.011564   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:41.091518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:44.163443   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:50.243526   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:53.315520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:59.395550   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:02.467533   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
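
The run of identical failures above is the provisioner repeatedly probing the guest's SSH port while the VM is unreachable. As a rough illustration (a minimal sketch, not minikube's actual code; the address is taken from the log and the retry loop is an assumption), the same error text comes straight out of a plain TCP dial in Go:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "192.168.61.176:22" // address from the log lines above
        for attempt := 1; attempt <= 3; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err != nil {
                // Against a stopped or unreachable guest this prints the same
                // "dial tcp 192.168.61.176:22: connect: no route to host".
                fmt.Printf("attempt %d: %v\n", attempt, err)
                time.Sleep(3 * time.Second)
                continue
            }
            conn.Close()
            fmt.Println("SSH port reachable")
            return
        }
    }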
	I1212 21:09:05.471384   60833 start.go:369] acquired machines lock for "embed-certs-831188" in 4m18.011296189s
	I1212 21:09:05.471446   60833 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:05.471453   60833 fix.go:54] fixHost starting: 
	I1212 21:09:05.471803   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:05.471837   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:05.486451   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1212 21:09:05.486900   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:05.487381   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:05.487404   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:05.487715   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:05.487879   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:05.488020   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:05.489670   60833 fix.go:102] recreateIfNeeded on embed-certs-831188: state=Stopped err=<nil>
	I1212 21:09:05.489704   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	W1212 21:09:05.489876   60833 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:05.492059   60833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831188" ...
	I1212 21:09:05.493752   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Start
	I1212 21:09:05.493959   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring networks are active...
	I1212 21:09:05.494984   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network default is active
	I1212 21:09:05.495423   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network mk-embed-certs-831188 is active
	I1212 21:09:05.495761   60833 main.go:141] libmachine: (embed-certs-831188) Getting domain xml...
	I1212 21:09:05.496421   60833 main.go:141] libmachine: (embed-certs-831188) Creating domain...
	I1212 21:09:06.732388   60833 main.go:141] libmachine: (embed-certs-831188) Waiting to get IP...
	I1212 21:09:06.733338   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:06.733708   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:06.733785   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:06.733676   61768 retry.go:31] will retry after 284.906493ms: waiting for machine to come up
	I1212 21:09:07.020284   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.020718   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.020745   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.020671   61768 retry.go:31] will retry after 293.274895ms: waiting for machine to come up
	I1212 21:09:07.315313   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.315686   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.315712   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.315641   61768 retry.go:31] will retry after 361.328832ms: waiting for machine to come up
	I1212 21:09:05.469256   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:05.469293   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:09:05.471233   60628 machine.go:91] provisioned docker machine in 4m37.408714984s
	I1212 21:09:05.471294   60628 fix.go:56] fixHost completed within 4m37.431179626s
	I1212 21:09:05.471299   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 4m37.431203273s
	W1212 21:09:05.471318   60628 start.go:694] error starting host: provision: host is not running
	W1212 21:09:05.471416   60628 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 21:09:05.471424   60628 start.go:709] Will try again in 5 seconds ...
	I1212 21:09:07.678255   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.678636   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.678700   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.678599   61768 retry.go:31] will retry after 604.479659ms: waiting for machine to come up
	I1212 21:09:08.284350   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:08.284754   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:08.284779   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:08.284701   61768 retry.go:31] will retry after 731.323448ms: waiting for machine to come up
	I1212 21:09:09.017564   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.018007   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.018040   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.017968   61768 retry.go:31] will retry after 734.083609ms: waiting for machine to come up
	I1212 21:09:09.753947   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.754423   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.754446   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.754362   61768 retry.go:31] will retry after 786.816799ms: waiting for machine to come up
	I1212 21:09:10.542771   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:10.543304   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:10.543341   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:10.543264   61768 retry.go:31] will retry after 1.40646031s: waiting for machine to come up
	I1212 21:09:11.951821   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:11.952180   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:11.952223   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:11.952135   61768 retry.go:31] will retry after 1.693488962s: waiting for machine to come up
	I1212 21:09:10.473087   60628 start.go:365] acquiring machines lock for no-preload-343495: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:09:13.646801   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:13.647256   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:13.647299   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:13.647180   61768 retry.go:31] will retry after 1.856056162s: waiting for machine to come up
	I1212 21:09:15.504815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:15.505228   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:15.505258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:15.505175   61768 retry.go:31] will retry after 2.008264333s: waiting for machine to come up
	I1212 21:09:17.516231   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:17.516653   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:17.516683   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:17.516604   61768 retry.go:31] will retry after 3.239343078s: waiting for machine to come up
	I1212 21:09:20.757258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:20.757696   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:20.757725   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:20.757654   61768 retry.go:31] will retry after 4.315081016s: waiting for machine to come up
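
The "will retry after ..." delays above grow roughly geometrically with some jitter while the provisioner waits for the VM to obtain an IP. A minimal sketch of that pattern (illustrative only; the starting delay, growth factor, and jitter rule are assumptions, not minikube's actual retry policy):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            // ...probe for the machine's IP address here...
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the base delay for the next attempt
        }
    }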
	I1212 21:09:26.424166   60948 start.go:369] acquired machines lock for "old-k8s-version-372099" in 4m29.049387398s
	I1212 21:09:26.424241   60948 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:26.424254   60948 fix.go:54] fixHost starting: 
	I1212 21:09:26.424715   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:26.424763   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:26.444634   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I1212 21:09:26.445043   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:26.445520   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:09:26.445538   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:26.445863   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:26.446052   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:26.446192   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:09:26.447776   60948 fix.go:102] recreateIfNeeded on old-k8s-version-372099: state=Stopped err=<nil>
	I1212 21:09:26.447804   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	W1212 21:09:26.448015   60948 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:26.450126   60948 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-372099" ...
	I1212 21:09:26.451553   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Start
	I1212 21:09:26.451708   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring networks are active...
	I1212 21:09:26.452388   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network default is active
	I1212 21:09:26.452655   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network mk-old-k8s-version-372099 is active
	I1212 21:09:26.453124   60948 main.go:141] libmachine: (old-k8s-version-372099) Getting domain xml...
	I1212 21:09:26.453799   60948 main.go:141] libmachine: (old-k8s-version-372099) Creating domain...
	I1212 21:09:25.078112   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078553   60833 main.go:141] libmachine: (embed-certs-831188) Found IP for machine: 192.168.50.163
	I1212 21:09:25.078585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has current primary IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078596   60833 main.go:141] libmachine: (embed-certs-831188) Reserving static IP address...
	I1212 21:09:25.078997   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.079030   60833 main.go:141] libmachine: (embed-certs-831188) Reserved static IP address: 192.168.50.163
	I1212 21:09:25.079052   60833 main.go:141] libmachine: (embed-certs-831188) DBG | skip adding static IP to network mk-embed-certs-831188 - found existing host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"}
	I1212 21:09:25.079071   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Getting to WaitForSSH function...
	I1212 21:09:25.079085   60833 main.go:141] libmachine: (embed-certs-831188) Waiting for SSH to be available...
	I1212 21:09:25.080901   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081194   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.081242   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081366   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH client type: external
	I1212 21:09:25.081388   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa (-rw-------)
	I1212 21:09:25.081416   60833 main.go:141] libmachine: (embed-certs-831188) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:25.081426   60833 main.go:141] libmachine: (embed-certs-831188) DBG | About to run SSH command:
	I1212 21:09:25.081438   60833 main.go:141] libmachine: (embed-certs-831188) DBG | exit 0
	I1212 21:09:25.171277   60833 main.go:141] libmachine: (embed-certs-831188) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:25.171663   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetConfigRaw
	I1212 21:09:25.172345   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.174944   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175302   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.175333   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175553   60833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/config.json ...
	I1212 21:09:25.175828   60833 machine.go:88] provisioning docker machine ...
	I1212 21:09:25.175855   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:25.176065   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176212   60833 buildroot.go:166] provisioning hostname "embed-certs-831188"
	I1212 21:09:25.176233   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176371   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.178556   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178823   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.178850   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178957   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.179142   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179295   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179436   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.179558   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.179895   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.179910   60833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831188 && echo "embed-certs-831188" | sudo tee /etc/hostname
	I1212 21:09:25.312418   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831188
	
	I1212 21:09:25.312457   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.315156   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315529   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.315570   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315707   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.315895   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316053   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316211   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.316378   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.316840   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.316869   60833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831188' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831188/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831188' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:25.448302   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:25.448332   60833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:25.448353   60833 buildroot.go:174] setting up certificates
	I1212 21:09:25.448362   60833 provision.go:83] configureAuth start
	I1212 21:09:25.448369   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.448691   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.451262   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.451639   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451807   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.454144   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454434   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.454460   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454596   60833 provision.go:138] copyHostCerts
	I1212 21:09:25.454665   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:25.454689   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:25.454775   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:25.454928   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:25.454940   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:25.454984   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:25.455062   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:25.455073   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:25.455106   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:25.455171   60833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831188 san=[192.168.50.163 192.168.50.163 localhost 127.0.0.1 minikube embed-certs-831188]
	I1212 21:09:25.678855   60833 provision.go:172] copyRemoteCerts
	I1212 21:09:25.678942   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:25.678975   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.681866   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682221   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.682249   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682399   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.682590   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.682730   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.682856   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:25.773454   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:25.796334   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 21:09:25.818680   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:09:25.840234   60833 provision.go:86] duration metric: configureAuth took 391.845214ms
	I1212 21:09:25.840268   60833 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:25.840497   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:25.840643   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.842988   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843431   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.843482   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843586   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.843772   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.843946   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.844066   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.844227   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.844542   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.844563   60833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:26.167363   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
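
The "%!s(MISSING)" fragments in the command above (and the later "date +%!s(MISSING).%!N(MISSING)" and "%!p(MISSING)" lines) are almost certainly just how the logger rendered command strings that contain literal "%" verbs: Go's fmt package prints "%!verb(MISSING)" when a format string has more verbs than arguments. A one-line reproduction (go vet flags it, but it compiles and runs):

    package main

    import "fmt"

    func main() {
        // A command string containing "%s" and "%N", passed through a
        // printf-style call with no arguments, renders exactly as in the log.
        fmt.Printf("date +%s.%N\n")
    }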
	
	I1212 21:09:26.167388   60833 machine.go:91] provisioned docker machine in 991.541719ms
	I1212 21:09:26.167398   60833 start.go:300] post-start starting for "embed-certs-831188" (driver="kvm2")
	I1212 21:09:26.167408   60833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:26.167444   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.167739   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:26.167763   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.170188   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170569   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.170611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170712   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.170880   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.171049   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.171194   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.261249   60833 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:26.265429   60833 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:26.265451   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:26.265522   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:26.265602   60833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:26.265695   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:26.274054   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:26.297890   60833 start.go:303] post-start completed in 130.478946ms
	I1212 21:09:26.297915   60833 fix.go:56] fixHost completed within 20.826462284s
	I1212 21:09:26.297934   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.300585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.300934   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.300975   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.301144   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.301359   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301529   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301665   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.301797   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:26.302153   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:26.302164   60833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:26.423978   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415366.370228005
	
	I1212 21:09:26.424008   60833 fix.go:206] guest clock: 1702415366.370228005
	I1212 21:09:26.424019   60833 fix.go:219] Guest: 2023-12-12 21:09:26.370228005 +0000 UTC Remote: 2023-12-12 21:09:26.297918475 +0000 UTC m=+278.991313322 (delta=72.30953ms)
	I1212 21:09:26.424052   60833 fix.go:190] guest clock delta is within tolerance: 72.30953ms
	I1212 21:09:26.424061   60833 start.go:83] releasing machines lock for "embed-certs-831188", held for 20.952636536s
	I1212 21:09:26.424090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.424347   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:26.427068   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427479   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.427519   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427592   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428173   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428344   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428414   60833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:26.428470   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.428492   60833 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:26.428508   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.430943   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431251   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431371   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431393   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431548   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431631   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431654   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431776   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.431844   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431998   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.432040   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432285   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.432490   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.548980   60833 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:26.555211   60833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:26.707171   60833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:26.714564   60833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:26.714658   60833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:26.730858   60833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
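
The find/mv step above sidelines any bridge or podman CNI configs so they do not conflict with the CNI that minikube configures itself. A rough local equivalent of that rename; the directory and the .mk_disabled suffix are taken from the log, error handling is simplified:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		// Skip files that are already disabled or unrelated to bridge/podman.
		if strings.HasSuffix(base, ".mk_disabled") ||
			!(strings.Contains(base, "bridge") || strings.Contains(base, "podman")) {
			continue
		}
		if err := os.Rename(p, p+".mk_disabled"); err != nil {
			fmt.Println("rename failed:", err)
			continue
		}
		fmt.Println("disabled", p)
	}
}
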
	I1212 21:09:26.730890   60833 start.go:475] detecting cgroup driver to use...
	I1212 21:09:26.730963   60833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:26.751316   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:26.766700   60833 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:26.766767   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:26.783157   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:26.799559   60833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:26.908659   60833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:27.029185   60833 docker.go:219] disabling docker service ...
	I1212 21:09:27.029245   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:27.042969   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:27.055477   60833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:27.174297   60833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:27.285338   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:27.299676   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:27.317832   60833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:09:27.317900   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.329270   60833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:27.329346   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.341201   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.353243   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
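
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and a "pod" conmon cgroup. A small sketch that assembles the same command strings; the helper name is mine, minikube runs the equivalent strings over SSH:

package main

import "fmt"

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// crioSedCommands returns the shell commands used to pin the pause image and
// cgroup settings in cri-o's drop-in config, mirroring the log lines above.
func crioSedCommands(pauseImage, cgroupManager string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, crioConf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, crioConf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, crioConf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, crioConf),
	}
}

func main() {
	for _, cmd := range crioSedCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}
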
	I1212 21:09:27.365796   60833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:27.377700   60833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:27.388796   60833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:27.388858   60833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:27.401983   60833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
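
The sysctl probe for net.bridge.bridge-nf-call-iptables is allowed to fail, as it does above, because /proc/sys/net/bridge only exists once br_netfilter is loaded; the fallback is to load the module and then enable IPv4 forwarding. A hedged sketch of that sequence using os/exec (must run as root; the commands mirror the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The probe may fail before the module is loaded; treat that as non-fatal.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("probe failed (expected before module load):", err)
	}
	if err := run("modprobe", "br_netfilter"); err != nil {
		fmt.Println("modprobe br_netfilter:", err)
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
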
	I1212 21:09:27.411527   60833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:27.523326   60833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:27.702370   60833 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:27.702435   60833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:27.707537   60833 start.go:543] Will wait 60s for crictl version
	I1212 21:09:27.707619   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:09:27.711502   60833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:27.750808   60833 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
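
After restarting crio, the log shows a bounded wait ("Will wait 60s") for /var/run/crio/crio.sock to appear before crictl is probed. A minimal polling sketch of that kind of wait; the interval is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is up; safe to run `crictl version`")
}
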
	I1212 21:09:27.750912   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.799419   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.848900   60833 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 21:09:27.722142   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting to get IP...
	I1212 21:09:27.723300   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.723736   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.723806   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.723702   61894 retry.go:31] will retry after 267.755874ms: waiting for machine to come up
	I1212 21:09:27.993406   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.993917   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.993947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.993865   61894 retry.go:31] will retry after 314.872831ms: waiting for machine to come up
	I1212 21:09:28.310446   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.311022   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.311051   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.310971   61894 retry.go:31] will retry after 435.368111ms: waiting for machine to come up
	I1212 21:09:28.747774   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.748267   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.748299   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.748238   61894 retry.go:31] will retry after 521.305154ms: waiting for machine to come up
	I1212 21:09:29.270989   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.271519   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.271553   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.271446   61894 retry.go:31] will retry after 482.42376ms: waiting for machine to come up
	I1212 21:09:29.755222   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.755724   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.755755   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.755671   61894 retry.go:31] will retry after 676.918794ms: waiting for machine to come up
	I1212 21:09:30.434488   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:30.435072   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:30.435103   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:30.435025   61894 retry.go:31] will retry after 876.618903ms: waiting for machine to come up
	I1212 21:09:31.313270   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:31.313826   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:31.313857   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:31.313775   61894 retry.go:31] will retry after 1.03353638s: waiting for machine to come up
	I1212 21:09:27.850614   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:27.853633   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854033   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:27.854069   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854243   60833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:27.858626   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:27.871999   60833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:09:27.872058   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:27.920758   60833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:09:27.920832   60833 ssh_runner.go:195] Run: which lz4
	I1212 21:09:27.924857   60833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:09:27.929186   60833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:27.929220   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:09:29.834194   60833 crio.go:444] Took 1.909381 seconds to copy over tarball
	I1212 21:09:29.834285   60833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:32.348562   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:32.349019   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:32.349041   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:32.348978   61894 retry.go:31] will retry after 1.80085882s: waiting for machine to come up
	I1212 21:09:34.151943   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:34.152375   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:34.152416   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:34.152343   61894 retry.go:31] will retry after 2.08304575s: waiting for machine to come up
	I1212 21:09:36.238682   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:36.239115   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:36.239149   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:36.239074   61894 retry.go:31] will retry after 2.109809124s: waiting for machine to come up
	I1212 21:09:33.005355   60833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.171034001s)
	I1212 21:09:33.005386   60833 crio.go:451] Took 3.171167 seconds to extract the tarball
	I1212 21:09:33.005398   60833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:33.046773   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:33.101606   60833 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:09:33.101627   60833 cache_images.go:84] Images are preloaded, skipping loading
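
The two `crictl images --output json` calls above are how the tooling decides whether the preload tarball still needs to be copied and extracted: before extraction the expected kube-apiserver image is missing, afterwards all images are reported as preloaded. A rough sketch of that check; the JSON field names follow crictl's output format and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image tag contains the wanted reference.
func hasImage(out []byte, want string) (bool, error) {
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("preloaded images present:", ok)
}
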
	I1212 21:09:33.101689   60833 ssh_runner.go:195] Run: crio config
	I1212 21:09:33.162553   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:33.162584   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:33.162608   60833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:33.162637   60833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831188 NodeName:embed-certs-831188 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:09:33.162806   60833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831188"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:33.162923   60833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-831188 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:33.162978   60833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:09:33.171937   60833 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:33.172013   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:33.180480   60833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 21:09:33.197675   60833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:33.214560   60833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 21:09:33.234926   60833 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:33.238913   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:33.255261   60833 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188 for IP: 192.168.50.163
	I1212 21:09:33.255320   60833 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:33.255462   60833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:33.255496   60833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:33.255561   60833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/client.key
	I1212 21:09:33.255641   60833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key.6a576ed8
	I1212 21:09:33.255686   60833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key
	I1212 21:09:33.255781   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:33.255807   60833 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:33.255814   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:33.255835   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:33.255864   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:33.255885   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:33.255931   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:33.256505   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:33.282336   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:33.307179   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:33.332468   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:09:33.357444   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:33.383372   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:33.409070   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:33.438164   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:33.467676   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:33.496645   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:33.523126   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:33.548366   60833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:33.567745   60833 ssh_runner.go:195] Run: openssl version
	I1212 21:09:33.573716   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:33.584221   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589689   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589767   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.595880   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:33.609574   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:33.623129   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629541   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629615   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.635862   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:33.646421   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:33.656686   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661397   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661473   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.667092   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:33.677905   60833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:33.682795   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:33.689346   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:33.695822   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:33.702368   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:33.708500   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:33.714793   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
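
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now; only certificates that pass are reused. The same check can be expressed directly with crypto/x509; a small sketch, with the file path taken from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is still valid")
	}
}
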
	I1212 21:09:33.721121   60833 kubeadm.go:404] StartCluster: {Name:embed-certs-831188 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:33.721252   60833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:33.721319   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:33.759428   60833 cri.go:89] found id: ""
	I1212 21:09:33.759502   60833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:33.769592   60833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:33.769617   60833 kubeadm.go:636] restartCluster start
	I1212 21:09:33.769712   60833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:33.779313   60833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.780838   60833 kubeconfig.go:92] found "embed-certs-831188" server: "https://192.168.50.163:8443"
	I1212 21:09:33.784096   60833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:33.793192   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.793314   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.805112   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.805139   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.805196   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.816975   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.317757   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.317858   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.329702   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.817167   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.817266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.828633   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.317136   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.317230   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.328803   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.818032   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.818121   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.829428   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.318141   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.318253   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.330749   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.817284   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.817367   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.828787   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:37.317183   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.317266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.334557   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.350131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:38.350522   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:38.350546   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:38.350484   61894 retry.go:31] will retry after 2.423656351s: waiting for machine to come up
	I1212 21:09:40.777036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:40.777455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:40.777489   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:40.777399   61894 retry.go:31] will retry after 3.275180742s: waiting for machine to come up
	I1212 21:09:37.817090   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.817219   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.833813   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.317328   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.317409   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.334684   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.817255   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.817353   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.831011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.317555   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.317648   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.330189   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.817759   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.817866   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.830611   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.317127   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.317198   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.329508   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.817580   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.817677   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.829289   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.317853   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.317928   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.331394   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.818013   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.818098   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.829011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:42.317526   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.317610   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.329211   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:44.056058   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:44.056558   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:44.056587   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:44.056517   61894 retry.go:31] will retry after 4.729711581s: waiting for machine to come up
	I1212 21:09:42.818081   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.818166   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.829930   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.317420   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:43.317526   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:43.328536   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.794084   60833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:09:43.794118   60833 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:09:43.794129   60833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:09:43.794192   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:43.842360   60833 cri.go:89] found id: ""
	I1212 21:09:43.842431   60833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:09:43.859189   60833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:09:43.869065   60833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:09:43.869135   60833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878614   60833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878644   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.011533   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.544591   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.757944   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.850440   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
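
The restart path above does not rerun a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch that runs the same sequence; the binary location is inferred from the PATH shown in the log, and the env/sudo wrapping is omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
			return
		}
	}
	fmt.Println("all init phases replayed")
}
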
	I1212 21:09:44.942874   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:09:44.942967   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:44.954886   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.466556   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.966545   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.465991   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.966021   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.987348   60833 api_server.go:72] duration metric: took 2.04447632s to wait for apiserver process to appear ...
	I1212 21:09:46.987374   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:09:46.987388   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.987890   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:46.987926   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.988389   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
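
Once the apiserver process exists, readiness is decided by polling https://192.168.50.163:8443/healthz until it answers, which is why the early attempts above end in "connection refused". A simplified poller; certificate verification is skipped here purely to keep the sketch short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skipping verification avoids wiring up the
			// cluster CA in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.163:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200 OK")
}
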
	I1212 21:09:50.008527   61298 start.go:369] acquired machines lock for "default-k8s-diff-port-171828" in 3m47.787737833s
	I1212 21:09:50.008595   61298 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:50.008607   61298 fix.go:54] fixHost starting: 
	I1212 21:09:50.008999   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:50.009035   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:50.025692   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I1212 21:09:50.026047   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:50.026541   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:09:50.026563   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:50.026945   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:50.027160   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:09:50.027344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:09:50.029005   61298 fix.go:102] recreateIfNeeded on default-k8s-diff-port-171828: state=Stopped err=<nil>
	I1212 21:09:50.029031   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	W1212 21:09:50.029193   61298 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:50.031805   61298 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-171828" ...
	I1212 21:09:48.789770   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790158   60948 main.go:141] libmachine: (old-k8s-version-372099) Found IP for machine: 192.168.39.202
	I1212 21:09:48.790172   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserving static IP address...
	I1212 21:09:48.790195   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790655   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.790683   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserved static IP address: 192.168.39.202
	I1212 21:09:48.790701   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | skip adding static IP to network mk-old-k8s-version-372099 - found existing host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"}
	I1212 21:09:48.790719   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Getting to WaitForSSH function...
	I1212 21:09:48.790736   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting for SSH to be available...
	I1212 21:09:48.793069   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793392   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.793418   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793542   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH client type: external
	I1212 21:09:48.793582   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa (-rw-------)
	I1212 21:09:48.793610   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:48.793620   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | About to run SSH command:
	I1212 21:09:48.793629   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | exit 0
	I1212 21:09:48.883487   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | SSH cmd err, output: <nil>: 
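
The WaitForSSH step above simply retries `ssh ... exit 0` with the machine's private key until the guest accepts a connection. A stripped-down version of that loop; the SSH options are shortened from the full command shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op remote command until it succeeds or the timeout passes.
func waitForSSH(user, addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s@%s never became available", user, addr)
}

func main() {
	key := "/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa"
	if err := waitForSSH("docker", "192.168.39.202", key, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}
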
	I1212 21:09:48.883885   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetConfigRaw
	I1212 21:09:48.884519   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:48.887128   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.887485   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887734   60948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/config.json ...
	I1212 21:09:48.887918   60948 machine.go:88] provisioning docker machine ...
	I1212 21:09:48.887936   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:48.888097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888225   60948 buildroot.go:166] provisioning hostname "old-k8s-version-372099"
	I1212 21:09:48.888238   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888378   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:48.890462   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890820   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.890847   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890982   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:48.891139   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891289   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:48.891597   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:48.891940   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:48.891955   60948 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-372099 && echo "old-k8s-version-372099" | sudo tee /etc/hostname
	I1212 21:09:49.012923   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-372099
	
	I1212 21:09:49.012954   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.015698   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016076   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.016117   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016245   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.016437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016583   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016710   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.016859   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.017308   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.017338   60948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-372099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-372099/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-372099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:49.144804   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
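The two SSH commands above (set the transient and persistent hostname, then patch /etc/hosts) are what the "provisioning hostname" step amounts to. For reference, a minimal Go sketch of issuing the same remote command with golang.org/x/crypto/ssh; the address, user and key path are copied from the log, while the rest is illustrative and not minikube's libmachine provisioner:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and address come from the log above; the client code is a sketch.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
        }
        client, err := ssh.Dial("tcp", "192.168.39.202:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname old-k8s-version-372099 && echo "old-k8s-version-372099" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }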
	I1212 21:09:49.144842   60948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:49.144875   60948 buildroot.go:174] setting up certificates
	I1212 21:09:49.144885   60948 provision.go:83] configureAuth start
	I1212 21:09:49.144896   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:49.145181   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:49.147947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148294   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.148340   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148475   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.151218   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.151697   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.151760   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.152022   60948 provision.go:138] copyHostCerts
	I1212 21:09:49.152083   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:49.152102   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:49.152172   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:49.152299   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:49.152307   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:49.152335   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:49.152402   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:49.152407   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:49.152428   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:49.152485   60948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-372099 san=[192.168.39.202 192.168.39.202 localhost 127.0.0.1 minikube old-k8s-version-372099]
	I1212 21:09:49.298406   60948 provision.go:172] copyRemoteCerts
	I1212 21:09:49.298478   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:49.298508   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.301384   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301696   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.301729   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301948   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.302156   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.302320   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.302442   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.385046   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:09:49.409667   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:49.434002   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:09:49.458872   60948 provision.go:86] duration metric: configureAuth took 313.97378ms
	I1212 21:09:49.458907   60948 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:49.459075   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:09:49.459143   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.461794   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.462183   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462373   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.462574   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462730   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462857   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.463042   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.463594   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.463641   60948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:49.767652   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:49.767745   60948 machine.go:91] provisioned docker machine in 879.803204ms
	I1212 21:09:49.767772   60948 start.go:300] post-start starting for "old-k8s-version-372099" (driver="kvm2")
	I1212 21:09:49.767785   60948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:49.767812   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:49.768162   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:49.768191   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.770970   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771351   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.771388   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771595   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.771805   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.772009   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.772155   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.857053   60948 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:49.861510   60948 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:49.861535   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:49.861600   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:49.861672   60948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:49.861781   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:49.869967   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:49.892746   60948 start.go:303] post-start completed in 124.959403ms
	I1212 21:09:49.892768   60948 fix.go:56] fixHost completed within 23.468514721s
	I1212 21:09:49.892790   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.895273   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895618   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.895653   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895776   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.895951   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896269   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.896433   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.896887   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.896904   60948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:50.008384   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415389.953345991
	
	I1212 21:09:50.008407   60948 fix.go:206] guest clock: 1702415389.953345991
	I1212 21:09:50.008415   60948 fix.go:219] Guest: 2023-12-12 21:09:49.953345991 +0000 UTC Remote: 2023-12-12 21:09:49.89277138 +0000 UTC m=+292.853960893 (delta=60.574611ms)
	I1212 21:09:50.008441   60948 fix.go:190] guest clock delta is within tolerance: 60.574611ms
	I1212 21:09:50.008445   60948 start.go:83] releasing machines lock for "old-k8s-version-372099", held for 23.584233709s
	I1212 21:09:50.008469   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.008757   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:50.011577   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.011930   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.011958   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.012109   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012750   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012964   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.013059   60948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:50.013102   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.013195   60948 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:50.013222   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.016031   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016304   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016525   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016566   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016720   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.016815   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016855   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016883   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017008   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.017080   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017186   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017256   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.017357   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017520   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.125100   60948 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:50.132264   60948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:50.278965   60948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:50.286230   60948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:50.286308   60948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:50.301165   60948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:50.301192   60948 start.go:475] detecting cgroup driver to use...
	I1212 21:09:50.301256   60948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:50.318715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:50.331943   60948 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:50.332013   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:50.348872   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:50.366970   60948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:50.492936   60948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:50.620103   60948 docker.go:219] disabling docker service ...
	I1212 21:09:50.620185   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:50.632962   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:50.644797   60948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:50.759039   60948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:50.884352   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:50.896549   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:50.919987   60948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 21:09:50.920056   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.932147   60948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:50.932224   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.941195   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.951010   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
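The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.1 as the pause image and cgroupfs (with conmon in the pod cgroup) as the cgroup manager. A rough Go equivalent of that rewrite, shown only to make the intent of the sed expressions explicit; it is a sketch, not minikube's crio.go:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Pin the pause image, matching the first sed above.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
        // Drop any existing conmon_cgroup line.
        data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
        // Force cgroupfs and re-add conmon_cgroup right after it.
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }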
	I1212 21:09:50.962752   60948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:50.975125   60948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:50.984906   60948 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:50.984971   60948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:50.999594   60948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:51.010344   60948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:51.114607   60948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:51.318020   60948 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:51.318108   60948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:51.325048   60948 start.go:543] Will wait 60s for crictl version
	I1212 21:09:51.325134   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:51.329905   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:51.377974   60948 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:51.378075   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.444024   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.512531   60948 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 21:09:51.514171   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:51.517083   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517636   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:51.517667   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517886   60948 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:51.522137   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
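The one-liner above is the idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current gateway IP, and copy the result back over /etc/hosts. A small Go sketch of the same filter-and-append pattern, with the path and entry taken from the log (everything else is illustrative):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale host.minikube.internal mapping before re-adding it.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        // Write a sibling file first, then rename; the swap stays atomic on the same
        // filesystem (needs root, just like the sudo cp in the log).
        tmp := "/etc/hosts.new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        if err := os.Rename(tmp, "/etc/hosts"); err != nil {
            panic(err)
        }
    }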
	I1212 21:09:51.538124   60948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 21:09:51.538191   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:51.594603   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:51.594688   60948 ssh_runner.go:195] Run: which lz4
	I1212 21:09:51.599732   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:51.604811   60948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:51.604844   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 21:09:50.033553   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Start
	I1212 21:09:50.033768   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring networks are active...
	I1212 21:09:50.034638   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network default is active
	I1212 21:09:50.035192   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network mk-default-k8s-diff-port-171828 is active
	I1212 21:09:50.035630   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Getting domain xml...
	I1212 21:09:50.036380   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Creating domain...
	I1212 21:09:51.530274   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting to get IP...
	I1212 21:09:51.531329   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531766   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.531744   62039 retry.go:31] will retry after 271.90604ms: waiting for machine to come up
	I1212 21:09:51.805469   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806028   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806062   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.805967   62039 retry.go:31] will retry after 338.221769ms: waiting for machine to come up
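The "will retry after 271.90604ms" / "will retry after 338.221769ms" lines come from a jittered retry loop while the restarted VM waits for a DHCP lease. A minimal sketch of that pattern; the function name and backoff bounds are made up for illustration and are not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter keeps calling fn until it succeeds or attempts run out,
    // sleeping a randomized, growing delay between tries so concurrent waiters
    // do not poll in lockstep.
    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base))) // in [base, 2*base)
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            base *= 2
        }
        return err
    }

    func main() {
        err := retryWithJitter(5, 250*time.Millisecond, func() error {
            return errors.New("waiting for machine to come up") // e.g. no DHCP lease yet
        })
        fmt.Println("result:", err)
    }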
	I1212 21:09:47.488610   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.543731   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.543786   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.543807   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.654915   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.654949   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.989408   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.996278   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:51.996337   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.488734   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.496289   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:52.496327   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.989065   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.997013   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:09:53.012736   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:09:53.012777   60833 api_server.go:131] duration metric: took 6.025395735s to wait for apiserver health ...
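The healthz progression above is typical of a restarting control plane: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal sketch of the kind of poll the test performs; the URL and insecure TLS are taken from the log, while the code itself is illustrative and not minikube's api_server.go:

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
    // 403 (anonymous access still forbidden) and 500 (post-start hooks pending)
    // both mean "keep waiting".
    func waitForHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
                resp, err := client.Get(url)
                if err != nil {
                    continue // apiserver not accepting connections yet
                }
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitForHealthz(ctx, "https://192.168.50.163:8443/healthz"))
    }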
	I1212 21:09:53.012789   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:53.012806   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:53.014820   60833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:09:53.016797   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:09:53.047434   60833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:09:53.095811   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:09:53.115354   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:09:53.115441   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:09:53.115465   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:09:53.115504   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:09:53.115532   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:09:53.115551   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:09:53.115582   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:09:53.115607   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:09:53.115633   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:09:53.115643   60833 system_pods.go:74] duration metric: took 19.808922ms to wait for pod list to return data ...
	I1212 21:09:53.115655   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:09:53.127006   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:09:53.127044   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:09:53.127058   60833 node_conditions.go:105] duration metric: took 11.39604ms to run NodePressure ...
	I1212 21:09:53.127079   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:53.597509   60833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603447   60833 kubeadm.go:787] kubelet initialised
	I1212 21:09:53.603476   60833 kubeadm.go:788] duration metric: took 5.932359ms waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603486   60833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:09:53.616570   60833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.623514   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623547   60833 pod_ready.go:81] duration metric: took 6.940441ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.623560   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623570   60833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.631395   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631426   60833 pod_ready.go:81] duration metric: took 7.844548ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.631438   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631453   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.649647   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649681   60833 pod_ready.go:81] duration metric: took 18.215042ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.649693   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649702   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.662239   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662271   60833 pod_ready.go:81] duration metric: took 12.552977ms waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.662285   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662298   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.005841   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005879   60833 pod_ready.go:81] duration metric: took 343.569867ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.005892   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005908   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.403249   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403280   60833 pod_ready.go:81] duration metric: took 397.363687ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.403291   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403297   60833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.802330   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802367   60833 pod_ready.go:81] duration metric: took 399.057426ms waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.802380   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802390   60833 pod_ready.go:38] duration metric: took 1.198894195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
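Each pod_ready.go entry above waits for one system pod's Ready condition and bails out early (the pod_ready.go:97 / :66 lines) because the node itself still reports Ready=False right after the restart. A hedged client-go sketch of the per-pod part of that wait; the function name, polling interval and example pod are illustrative, not minikube's pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports condition Ready=True,
    // mirroring the "waiting up to 4m0s for pod ... to be Ready" loop above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17734-9188/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-831188")
        fmt.Println("etcd ready:", err == nil, err)
    }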
	I1212 21:09:54.802413   60833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:09:54.822125   60833 ops.go:34] apiserver oom_adj: -16
	I1212 21:09:54.822154   60833 kubeadm.go:640] restartCluster took 21.052529291s
	I1212 21:09:54.822173   60833 kubeadm.go:406] StartCluster complete in 21.101061651s
	I1212 21:09:54.822194   60833 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.822273   60833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:09:54.825185   60833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.825490   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:09:54.825622   60833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:09:54.825714   60833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831188"
	I1212 21:09:54.825735   60833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-831188"
	W1212 21:09:54.825756   60833 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:09:54.825806   60833 addons.go:69] Setting metrics-server=true in profile "embed-certs-831188"
	I1212 21:09:54.825837   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.825849   60833 addons.go:231] Setting addon metrics-server=true in "embed-certs-831188"
	W1212 21:09:54.825863   60833 addons.go:240] addon metrics-server should already be in state true
	I1212 21:09:54.825969   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.826276   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826309   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826522   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826588   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826731   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:54.826767   60833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831188"
	I1212 21:09:54.826847   60833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831188"
	I1212 21:09:54.827349   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.827409   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.834506   60833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-831188" context rescaled to 1 replicas
	I1212 21:09:54.834614   60833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:09:54.837122   60833 out.go:177] * Verifying Kubernetes components...
	I1212 21:09:54.839094   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:09:54.846081   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I1212 21:09:54.846737   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847078   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I1212 21:09:54.847367   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.847387   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.847518   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847775   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848031   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.848053   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.848061   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.848355   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848912   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.848955   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.849635   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1212 21:09:54.849986   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.852255   60833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-831188"
	W1212 21:09:54.852279   60833 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:09:54.852306   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.852727   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.852758   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.853259   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.853289   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.853643   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.854187   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.854223   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.870249   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I1212 21:09:54.870805   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.871406   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.871430   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.871920   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.872090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.873692   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.876011   60833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:54.874681   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1212 21:09:54.877102   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1212 21:09:54.877666   60833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:54.877691   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:09:54.877710   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.877993   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878108   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878602   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878622   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.878738   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878754   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.879004   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.879362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.879426   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.880445   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.880486   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.881642   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.883715   60833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:09:54.885165   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:09:54.885184   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:09:54.885199   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.883021   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.883884   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.885257   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.885295   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.885442   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.885598   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.885727   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.893093   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893096   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.893152   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.893190   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.893534   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.893676   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.902833   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1212 21:09:54.903320   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.903867   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.903888   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.904337   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.904535   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.906183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.906443   60833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:54.906463   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:09:54.906484   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.909330   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.909914   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.909954   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.910136   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.910328   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.910492   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.910639   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:55.020642   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:55.123475   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:55.141398   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:09:55.141429   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:09:55.200799   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:09:55.200833   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:09:55.275142   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:55.275172   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:09:55.308985   60833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831188" to be "Ready" ...
	I1212 21:09:55.309133   60833 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:09:55.341251   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:56.829715   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706199185s)
	I1212 21:09:56.829768   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829780   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.829784   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.809111646s)
	I1212 21:09:56.829860   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829870   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830143   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.830166   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.830178   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.830188   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830267   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.831959   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832013   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.832048   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.831765   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831788   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831794   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.832139   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832236   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.833156   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.833196   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.843517   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.843542   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.843815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.843870   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.843880   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.023745   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.682445607s)
	I1212 21:09:57.023801   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.023815   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024252   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024263   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:57.024276   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024287   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.024303   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024676   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024691   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024706   60833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-831188"
	I1212 21:09:53.564404   60948 crio.go:444] Took 1.964711 seconds to copy over tarball
	I1212 21:09:53.564488   60948 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:57.052627   60948 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.488106402s)
	I1212 21:09:57.052657   60948 crio.go:451] Took 3.488218 seconds to extract the tarball
	I1212 21:09:57.052669   60948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:52.145724   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146453   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146484   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.146352   62039 retry.go:31] will retry after 482.98499ms: waiting for machine to come up
	I1212 21:09:52.630862   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631317   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631343   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.631232   62039 retry.go:31] will retry after 480.323704ms: waiting for machine to come up
	I1212 21:09:53.113661   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.114249   62039 retry.go:31] will retry after 649.543956ms: waiting for machine to come up
	I1212 21:09:53.765102   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765613   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765643   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.765558   62039 retry.go:31] will retry after 824.137815ms: waiting for machine to come up
	I1212 21:09:54.591782   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592356   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592391   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:54.592273   62039 retry.go:31] will retry after 874.563899ms: waiting for machine to come up
	I1212 21:09:55.468934   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469429   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469459   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:55.469393   62039 retry.go:31] will retry after 1.224276076s: waiting for machine to come up
	I1212 21:09:56.695111   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695604   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695637   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:56.695560   62039 retry.go:31] will retry after 1.207984075s: waiting for machine to come up
	I1212 21:09:57.157310   60833 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:09:57.322702   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:57.093318   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:57.723104   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:57.723132   60948 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:09:57.723259   60948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.723342   60948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.723442   60948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 21:09:57.723302   60948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724835   60948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 21:09:57.724864   60948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.724861   60948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.724836   60948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724853   60948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.724842   60948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.724847   60948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.724893   60948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.918047   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.920893   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.927072   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.928080   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.931259   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 21:09:57.932017   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.939580   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.990594   60948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 21:09:57.990667   60948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.990724   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.059759   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:58.095401   60948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 21:09:58.095451   60948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.095504   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138192   60948 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 21:09:58.138287   60948 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.138333   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138491   60948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 21:09:58.138532   60948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.138594   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145060   60948 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 21:09:58.145116   60948 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 21:09:58.145146   60948 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 21:09:58.145177   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145185   60948 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.145225   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145073   60948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 21:09:58.145250   60948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.145271   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145322   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:58.268621   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.268721   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.268774   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.268826   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 21:09:58.268863   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.268895   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.268956   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 21:09:58.408748   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 21:09:58.418795   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 21:09:58.418843   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 21:09:58.420451   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 21:09:58.420516   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 21:09:58.420577   60948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.420585   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 21:09:58.425621   60948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 21:09:58.425639   60948 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.425684   60948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 21:09:59.172682   60948 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 21:09:59.172736   60948 cache_images.go:92] LoadImages completed in 1.449590507s
	W1212 21:09:59.172819   60948 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1212 21:09:59.172900   60948 ssh_runner.go:195] Run: crio config
	I1212 21:09:59.238502   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:09:59.238522   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:59.238539   60948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:59.238560   60948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-372099 NodeName:old-k8s-version-372099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 21:09:59.238733   60948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-372099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-372099
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.202:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:59.238886   60948 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-372099 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:59.238953   60948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 21:09:59.249183   60948 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:59.249271   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:59.263171   60948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 21:09:59.281172   60948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:59.302622   60948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 21:09:59.323131   60948 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:59.327344   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:59.342182   60948 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099 for IP: 192.168.39.202
	I1212 21:09:59.342216   60948 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:59.342412   60948 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:59.342465   60948 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:59.342554   60948 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/client.key
	I1212 21:09:59.342659   60948 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key.9e66e972
	I1212 21:09:59.342723   60948 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key
	I1212 21:09:59.342854   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:59.342891   60948 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:59.342908   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:59.342947   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:59.342984   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:59.343024   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:59.343081   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:59.343948   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:59.375250   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:59.404892   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:59.434762   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:09:59.465696   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:59.496528   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:59.521739   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:59.545606   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:59.574153   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:59.599089   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:59.625217   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:59.654715   60948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:59.674946   60948 ssh_runner.go:195] Run: openssl version
	I1212 21:09:59.683295   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:59.697159   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702671   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702745   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.710931   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:59.723204   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:59.735713   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741621   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741715   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.748041   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:59.760217   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:59.772701   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778501   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778589   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.787066   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:59.803355   60948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:59.809920   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:59.819093   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:59.827918   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:59.836228   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:59.845437   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:59.852647   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:09:59.861170   60948 kubeadm.go:404] StartCluster: {Name:old-k8s-version-372099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:59.861285   60948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:59.861358   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:59.906807   60948 cri.go:89] found id: ""
	I1212 21:09:59.906885   60948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:59.919539   60948 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:59.919579   60948 kubeadm.go:636] restartCluster start
	I1212 21:09:59.919637   60948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:59.930547   60948 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.931845   60948 kubeconfig.go:92] found "old-k8s-version-372099" server: "https://192.168.39.202:8443"
	I1212 21:09:59.934471   60948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:59.945701   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.945780   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.959415   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.959438   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.959496   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.975677   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.476388   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.476469   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.493781   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.976367   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.976475   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.993084   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.476277   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.476362   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.490076   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.976393   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.976505   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.990771   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:57.905327   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905730   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:57.905649   62039 retry.go:31] will retry after 1.427858275s: waiting for machine to come up
	I1212 21:09:59.335284   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335735   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:59.335630   62039 retry.go:31] will retry after 1.773169552s: waiting for machine to come up
	I1212 21:10:01.110044   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110533   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:01.110468   62039 retry.go:31] will retry after 2.199207847s: waiting for machine to come up
	I1212 21:09:57.672094   60833 addons.go:502] enable addons completed in 2.846462968s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:09:59.822907   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:01.824673   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:02.325980   60833 node_ready.go:49] node "embed-certs-831188" has status "Ready":"True"
	I1212 21:10:02.326008   60833 node_ready.go:38] duration metric: took 7.016985612s waiting for node "embed-certs-831188" to be "Ready" ...
	I1212 21:10:02.326021   60833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:02.339547   60833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345609   60833 pod_ready.go:92] pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.345638   60833 pod_ready.go:81] duration metric: took 6.052243ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345652   60833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.476354   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.476429   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.489326   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:02.975846   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.975935   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.992975   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.476463   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.476577   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.489471   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.975762   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.975891   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.992773   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.476395   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.476510   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.489163   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.976403   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.976503   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.990508   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.475988   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.476108   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.489347   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.975811   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.975874   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.988996   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.475817   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.475896   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.487886   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.976376   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.976445   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.988627   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.312460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312859   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312892   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:03.312807   62039 retry.go:31] will retry after 4.329332977s: waiting for machine to come up
	I1212 21:10:02.864894   60833 pod_ready.go:92] pod "etcd-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.864921   60833 pod_ready.go:81] duration metric: took 519.26143ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.864935   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871360   60833 pod_ready.go:92] pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.871392   60833 pod_ready.go:81] duration metric: took 6.449389ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871406   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529203   60833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.529228   60833 pod_ready.go:81] duration metric: took 1.657813273s waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529243   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722607   60833 pod_ready.go:92] pod "kube-proxy-nsv4w" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.722631   60833 pod_ready.go:81] duration metric: took 193.381057ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722641   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124360   60833 pod_ready.go:92] pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:05.124388   60833 pod_ready.go:81] duration metric: took 401.739767ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124401   60833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:07.476521   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.476603   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.487362   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:07.976016   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.976101   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.987221   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.475793   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.475894   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.486641   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.976140   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.976262   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.987507   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.476080   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:09.476168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:09.487537   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.946342   60948 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:09.946377   60948 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:09.946412   60948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:09.946487   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:09.988850   60948 cri.go:89] found id: ""
	I1212 21:10:09.988939   60948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:10.004726   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:10.015722   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:10.015787   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025706   60948 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025743   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:10.156614   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.030056   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.219060   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.315587   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.398016   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:11.398110   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.411642   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.927297   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:07.644473   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644921   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644950   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:07.644868   62039 retry.go:31] will retry after 5.180616294s: waiting for machine to come up
	I1212 21:10:07.428366   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:09.929940   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.157275   60628 start.go:369] acquired machines lock for "no-preload-343495" in 1m3.684137096s
	I1212 21:10:14.157330   60628 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:10:14.157342   60628 fix.go:54] fixHost starting: 
	I1212 21:10:14.157767   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:14.157812   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:14.175936   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 21:10:14.176421   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:14.176957   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:10:14.176982   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:14.177380   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:14.177601   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:14.177804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:10:14.179672   60628 fix.go:102] recreateIfNeeded on no-preload-343495: state=Stopped err=<nil>
	I1212 21:10:14.179696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	W1212 21:10:14.179911   60628 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:10:14.183064   60628 out.go:177] * Restarting existing kvm2 VM for "no-preload-343495" ...
	I1212 21:10:12.828825   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.829471   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Found IP for machine: 192.168.72.253
	I1212 21:10:12.829501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserving static IP address...
	I1212 21:10:12.829530   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has current primary IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.830061   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.830110   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | skip adding static IP to network mk-default-k8s-diff-port-171828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"}
	I1212 21:10:12.830133   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserved static IP address: 192.168.72.253
	I1212 21:10:12.830152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Getting to WaitForSSH function...
	I1212 21:10:12.830163   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for SSH to be available...
	I1212 21:10:12.832654   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833033   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.833065   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833273   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH client type: external
	I1212 21:10:12.833302   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa (-rw-------)
	I1212 21:10:12.833335   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:12.833352   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | About to run SSH command:
	I1212 21:10:12.833370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | exit 0
	I1212 21:10:12.931871   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:12.932439   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetConfigRaw
	I1212 21:10:12.933250   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:12.936555   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937009   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.937051   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937341   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:10:12.937642   61298 machine.go:88] provisioning docker machine ...
	I1212 21:10:12.937669   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:12.937933   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938136   61298 buildroot.go:166] provisioning hostname "default-k8s-diff-port-171828"
	I1212 21:10:12.938161   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938373   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:12.941209   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941589   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.941620   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941796   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:12.941978   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942183   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:12.942539   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:12.942885   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:12.942904   61298 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-171828 && echo "default-k8s-diff-port-171828" | sudo tee /etc/hostname
	I1212 21:10:13.099123   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-171828
	
	I1212 21:10:13.099152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.102085   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.102496   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102756   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.102965   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103166   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.103580   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.104000   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.104034   61298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-171828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-171828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-171828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:13.246501   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:13.246535   61298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:13.246561   61298 buildroot.go:174] setting up certificates
	I1212 21:10:13.246577   61298 provision.go:83] configureAuth start
	I1212 21:10:13.246590   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:13.246875   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:13.249703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250010   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.250043   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250196   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.252501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.252814   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.252852   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.253086   61298 provision.go:138] copyHostCerts
	I1212 21:10:13.253151   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:13.253171   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:13.253266   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:13.253399   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:13.253412   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:13.253437   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:13.253501   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:13.253508   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:13.253526   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:13.253586   61298 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-171828 san=[192.168.72.253 192.168.72.253 localhost 127.0.0.1 minikube default-k8s-diff-port-171828]
	I1212 21:10:13.331755   61298 provision.go:172] copyRemoteCerts
	I1212 21:10:13.331819   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:13.331841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.334412   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334741   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.334777   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334981   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.335185   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.335369   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.335498   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.429448   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:10:13.454350   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:13.479200   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 21:10:13.505120   61298 provision.go:86] duration metric: configureAuth took 258.53005ms
	I1212 21:10:13.505151   61298 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:13.505370   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:13.505451   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.508400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.508826   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.508858   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.509144   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.509360   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509524   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.509829   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.510161   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.510184   61298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:13.874783   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:13.874810   61298 machine.go:91] provisioned docker machine in 937.151566ms
	I1212 21:10:13.874822   61298 start.go:300] post-start starting for "default-k8s-diff-port-171828" (driver="kvm2")
	I1212 21:10:13.874835   61298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:13.874853   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:13.875182   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:13.875213   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.877937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.878400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878640   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.878819   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.878984   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.879148   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.978276   61298 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:13.984077   61298 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:13.984114   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:13.984229   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:13.984309   61298 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:13.984391   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:13.996801   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:14.021773   61298 start.go:303] post-start completed in 146.935628ms
	I1212 21:10:14.021796   61298 fix.go:56] fixHost completed within 24.013191129s
	I1212 21:10:14.021815   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.024847   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.025227   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.025599   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025788   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025951   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.026106   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:14.026436   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:14.026452   61298 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 21:10:14.157053   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415414.138141396
	
	I1212 21:10:14.157082   61298 fix.go:206] guest clock: 1702415414.138141396
	I1212 21:10:14.157092   61298 fix.go:219] Guest: 2023-12-12 21:10:14.138141396 +0000 UTC Remote: 2023-12-12 21:10:14.021800288 +0000 UTC m=+251.962428882 (delta=116.341108ms)
	I1212 21:10:14.157130   61298 fix.go:190] guest clock delta is within tolerance: 116.341108ms
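	The fix.go lines above compare the guest clock against the host and only resync when the delta exceeds a tolerance (the 116ms delta here passes). A bare-bones sketch of that comparison, with an assumed 1-second tolerance since the actual threshold is not printed in this log:

	package main

	import (
		"fmt"
		"time"
	)

	// withinClockTolerance reports whether guest and host clocks differ by less
	// than the given tolerance (absolute value of the delta).
	func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta < tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(116 * time.Millisecond) // delta of the same order as in the log above
		// The 1s tolerance is an assumption for illustration, not minikube's actual value.
		fmt.Println(withinClockTolerance(guest, host, time.Second)) // true
	}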
	I1212 21:10:14.157141   61298 start.go:83] releasing machines lock for "default-k8s-diff-port-171828", held for 24.148576854s
	I1212 21:10:14.157193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.157567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:14.160748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161134   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.161172   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161489   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162089   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162259   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162333   61298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:14.162389   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.162627   61298 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:14.162652   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.165726   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.165941   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166485   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166548   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166598   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166636   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166649   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166905   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166907   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167104   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167153   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167231   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.167349   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167500   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.294350   61298 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:14.301705   61298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:14.459967   61298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:14.467979   61298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:14.468043   61298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:14.483883   61298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:14.483910   61298 start.go:475] detecting cgroup driver to use...
	I1212 21:10:14.483976   61298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:14.498105   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:14.511716   61298 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:14.511784   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:14.525795   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:14.539213   61298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:14.658453   61298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:14.786222   61298 docker.go:219] disabling docker service ...
	I1212 21:10:14.786296   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:14.801656   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:14.814821   61298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:14.950542   61298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:15.085306   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:15.098508   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:15.118634   61298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:15.118709   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.130579   61298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:15.130667   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.140672   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.150340   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.161966   61298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:15.173049   61298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:15.181620   61298 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:15.181703   61298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:15.195505   61298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:15.204076   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:15.327587   61298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:15.505003   61298 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:15.505078   61298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:15.512282   61298 start.go:543] Will wait 60s for crictl version
	I1212 21:10:15.512349   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:10:15.516564   61298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:15.556821   61298 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:15.556906   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.612743   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.665980   61298 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
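	The CRI-O preparation above boils down to a handful of sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager) followed by a daemon restart. A simplified sketch of composing those commands is below; runShell is a local stand-in for minikube's ssh_runner, and the function is illustrative rather than minikube's actual crio.go.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runShell is a stand-in for minikube's ssh_runner: it just runs the command
	// locally through bash so the sketch stays self-contained.
	func runShell(cmd string) error {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v: %s", cmd, err, out)
		}
		return nil
	}

	// configureCRIO applies the same kind of edits the log shows: set the pause
	// image, force the cgroup manager, then restart cri-o.
	func configureCRIO(pauseImage, cgroupManager string) error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			if err := runShell(c); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
			fmt.Println("configure failed:", err)
		}
	}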
	I1212 21:10:12.426883   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.927168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.962834   60948 api_server.go:72] duration metric: took 1.56481721s to wait for apiserver process to appear ...
	I1212 21:10:12.962862   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:12.962890   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.963447   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:12.963489   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.964022   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:13.464393   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:15.667323   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:15.670368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.670769   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:15.670804   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.671037   61298 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:15.675575   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:15.688523   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:10:15.688602   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:15.739601   61298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:10:15.739718   61298 ssh_runner.go:195] Run: which lz4
	I1212 21:10:15.744272   61298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:10:15.749574   61298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:10:15.749612   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:10:12.428614   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.430542   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:16.442797   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.184429   60628 main.go:141] libmachine: (no-preload-343495) Calling .Start
	I1212 21:10:14.184692   60628 main.go:141] libmachine: (no-preload-343495) Ensuring networks are active...
	I1212 21:10:14.186580   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network default is active
	I1212 21:10:14.187398   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network mk-no-preload-343495 is active
	I1212 21:10:14.188587   60628 main.go:141] libmachine: (no-preload-343495) Getting domain xml...
	I1212 21:10:14.189457   60628 main.go:141] libmachine: (no-preload-343495) Creating domain...
	I1212 21:10:15.509306   60628 main.go:141] libmachine: (no-preload-343495) Waiting to get IP...
	I1212 21:10:15.510320   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.510728   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.510772   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.510702   62255 retry.go:31] will retry after 275.567053ms: waiting for machine to come up
	I1212 21:10:15.788793   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.789233   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.789262   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.789193   62255 retry.go:31] will retry after 341.343409ms: waiting for machine to come up
	I1212 21:10:16.131936   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.132427   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.132452   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.132377   62255 retry.go:31] will retry after 302.905542ms: waiting for machine to come up
	I1212 21:10:16.437184   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.437944   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.437968   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.437850   62255 retry.go:31] will retry after 407.178114ms: waiting for machine to come up
	I1212 21:10:16.846738   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.847393   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.847429   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.847349   62255 retry.go:31] will retry after 507.703222ms: waiting for machine to come up
	I1212 21:10:17.357373   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:17.357975   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:17.358005   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:17.357907   62255 retry.go:31] will retry after 920.403188ms: waiting for machine to come up
	I1212 21:10:18.464726   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 21:10:18.464781   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.736922   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.736969   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.736990   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.816132   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.816165   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.964508   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.012996   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.013048   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.464538   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.509558   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.509601   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.965183   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:21.369579   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:10:21.381334   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:10:21.381365   60948 api_server.go:131] duration metric: took 8.418495294s to wait for apiserver health ...
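	As the transcript shows, the healthz wait treats connection errors, 403s and 500s as "not ready yet" and keeps polling until a 200 arrives. A bare-bones version of that wait loop follows; InsecureSkipVerify is used only because this sketch does not load the cluster CA, and the retry interval is an assumption.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes. Any other status or a connection error means retry.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Skip cert verification instead of loading the cluster CA (sketch only).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed retry interval
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.39.202:8443/healthz", 4*time.Minute))
	}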
	I1212 21:10:21.381378   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.381385   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.501371   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:21.801933   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:21.827010   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:21.853900   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:17.641827   61298 crio.go:444] Took 1.897583 seconds to copy over tarball
	I1212 21:10:17.641919   61298 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:10:21.283045   61298 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.641094924s)
	I1212 21:10:21.283076   61298 crio.go:451] Took 3.641222 seconds to extract the tarball
	I1212 21:10:21.283088   61298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:10:21.328123   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:21.387894   61298 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:10:21.387923   61298 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:10:21.387996   61298 ssh_runner.go:195] Run: crio config
	I1212 21:10:21.467191   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.467216   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.467255   61298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:21.467278   61298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-171828 NodeName:default-k8s-diff-port-171828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:21.467443   61298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-171828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:21.467537   61298 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-171828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 21:10:21.467596   61298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:10:21.478940   61298 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:21.479024   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:21.492604   61298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 21:10:21.514260   61298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:10:21.535059   61298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 21:10:21.557074   61298 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:21.562765   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:21.578989   61298 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828 for IP: 192.168.72.253
	I1212 21:10:21.579047   61298 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:21.579282   61298 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:21.579383   61298 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:21.579495   61298 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/client.key
	I1212 21:10:21.768212   61298 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key.a1600f99
	I1212 21:10:21.768305   61298 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key
	I1212 21:10:21.768447   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:21.768489   61298 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:21.768504   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:21.768542   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:21.768596   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:21.768625   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:21.768680   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:21.769557   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:21.800794   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:10:21.833001   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:21.864028   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:21.893107   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:21.918580   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:21.944095   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:21.970251   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:21.998947   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:22.027620   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:22.056851   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:22.084321   61298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:22.103273   61298 ssh_runner.go:195] Run: openssl version
	I1212 21:10:22.109518   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:18.932477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:21.431431   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:18.280164   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:18.280656   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:18.280687   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:18.280612   62255 retry.go:31] will retry after 761.825655ms: waiting for machine to come up
	I1212 21:10:19.043686   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:19.044170   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:19.044203   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:19.044117   62255 retry.go:31] will retry after 1.173408436s: waiting for machine to come up
	I1212 21:10:20.218938   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:20.219457   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:20.219488   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:20.219412   62255 retry.go:31] will retry after 1.484817124s: waiting for machine to come up
	I1212 21:10:21.706027   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:21.706505   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:21.706536   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:21.706467   62255 retry.go:31] will retry after 2.260831172s: waiting for machine to come up
	I1212 21:10:22.159195   60948 system_pods.go:59] 7 kube-system pods found
	I1212 21:10:22.284903   60948 system_pods.go:61] "coredns-5644d7b6d9-slvnx" [0db32241-69df-48dc-a60f-6921f9c5746f] Running
	I1212 21:10:22.284916   60948 system_pods.go:61] "etcd-old-k8s-version-372099" [72d219cb-b393-423d-ba62-b880bd2d26a0] Running
	I1212 21:10:22.284924   60948 system_pods.go:61] "kube-apiserver-old-k8s-version-372099" [c4f09d2d-07d2-4403-886b-37cb1471e7e5] Running
	I1212 21:10:22.284932   60948 system_pods.go:61] "kube-controller-manager-old-k8s-version-372099" [4a17c60c-2c72-4296-a7e4-0ae05e7bfa39] Running
	I1212 21:10:22.284939   60948 system_pods.go:61] "kube-proxy-5mvzb" [ec7c6540-35e2-4ae4-8592-d797132a8328] Running
	I1212 21:10:22.284945   60948 system_pods.go:61] "kube-scheduler-old-k8s-version-372099" [472284a4-9340-4bbc-8a1f-b9b55f4b0c3c] Running
	I1212 21:10:22.284952   60948 system_pods.go:61] "storage-provisioner" [b9fcec5f-bd1f-4c47-95cd-a9c8e3011e50] Running
	I1212 21:10:22.284961   60948 system_pods.go:74] duration metric: took 431.035724ms to wait for pod list to return data ...
	I1212 21:10:22.284990   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:22.592700   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:22.592734   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:22.592748   60948 node_conditions.go:105] duration metric: took 307.751463ms to run NodePressure ...
	I1212 21:10:22.592770   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:23.483331   60948 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:23.500661   60948 retry.go:31] will retry after 162.846257ms: kubelet not initialised
	I1212 21:10:23.669569   60948 retry.go:31] will retry after 257.344573ms: kubelet not initialised
	I1212 21:10:23.942373   60948 retry.go:31] will retry after 538.191385ms: kubelet not initialised
	I1212 21:10:24.487436   60948 retry.go:31] will retry after 635.824669ms: kubelet not initialised
	I1212 21:10:25.129226   60948 retry.go:31] will retry after 946.117517ms: kubelet not initialised
	I1212 21:10:26.082106   60948 retry.go:31] will retry after 2.374588936s: kubelet not initialised
	I1212 21:10:22.121093   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291519   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291585   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.297989   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:22.309847   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:22.321817   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326715   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326766   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.333001   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:22.345044   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:22.357827   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362795   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362858   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.368864   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:22.380605   61298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:22.385986   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:22.392931   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:22.399683   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:22.407203   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:22.414730   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:22.421808   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:10:22.430050   61298 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:22.430205   61298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:22.430263   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:22.482907   61298 cri.go:89] found id: ""
	I1212 21:10:22.482981   61298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:22.495001   61298 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:22.495032   61298 kubeadm.go:636] restartCluster start
	I1212 21:10:22.495104   61298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:22.506418   61298 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.508078   61298 kubeconfig.go:92] found "default-k8s-diff-port-171828" server: "https://192.168.72.253:8444"
	I1212 21:10:22.511809   61298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:22.523641   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.523703   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.536887   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.536913   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.536965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.549418   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.050111   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.050218   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.063845   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.550201   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.550303   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.567468   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.050021   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.050193   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.064792   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.550119   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.550213   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.568169   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.049891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.049997   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.063341   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.549592   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.549682   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.564096   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.049596   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.049701   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.063482   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.549680   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.549793   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.563956   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.049482   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.049614   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.062881   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.440487   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:25.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:23.969715   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:23.970242   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:23.970272   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:23.970200   62255 retry.go:31] will retry after 1.769886418s: waiting for machine to come up
	I1212 21:10:25.741628   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:25.742060   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:25.742098   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:25.742014   62255 retry.go:31] will retry after 2.283589137s: waiting for machine to come up
	I1212 21:10:28.462838   60948 retry.go:31] will retry after 1.809333362s: kubelet not initialised
	I1212 21:10:30.278747   60948 retry.go:31] will retry after 4.059791455s: kubelet not initialised
	I1212 21:10:27.550084   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.550176   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.564365   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.049688   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.049771   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.065367   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.549922   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.550009   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.566964   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.049535   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.049643   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.062264   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.549891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.549970   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.563687   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.050397   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.050492   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.065602   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.550210   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.550298   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.562793   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.050281   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.050374   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.064836   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.550407   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.550527   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.563474   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:32.049593   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:32.049689   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:32.062459   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.935166   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:30.429274   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:28.028345   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:28.028796   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:28.028824   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:28.028757   62255 retry.go:31] will retry after 4.021160394s: waiting for machine to come up
	I1212 21:10:32.052992   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:32.053479   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:32.053506   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:32.053442   62255 retry.go:31] will retry after 4.864494505s: waiting for machine to come up
	I1212 21:10:34.344571   60948 retry.go:31] will retry after 9.338953291s: kubelet not initialised
	I1212 21:10:32.524460   61298 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:32.524492   61298 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:32.524523   61298 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:32.524586   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:32.565596   61298 cri.go:89] found id: ""
	I1212 21:10:32.565685   61298 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:32.582458   61298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:32.592539   61298 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:32.592615   61298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603658   61298 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603683   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:32.730418   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.535390   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.742601   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.839081   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.909128   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:33.909209   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:33.928197   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.452146   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.952473   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.452270   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.952431   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.451626   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.482100   61298 api_server.go:72] duration metric: took 2.572973799s to wait for apiserver process to appear ...
	I1212 21:10:36.482125   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:36.482154   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.482833   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.482869   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.483345   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.984105   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:32.433032   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:34.928686   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.930503   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.920697   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921201   60628 main.go:141] libmachine: (no-preload-343495) Found IP for machine: 192.168.61.176
	I1212 21:10:36.921235   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has current primary IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921248   60628 main.go:141] libmachine: (no-preload-343495) Reserving static IP address...
	I1212 21:10:36.921719   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.921757   60628 main.go:141] libmachine: (no-preload-343495) DBG | skip adding static IP to network mk-no-preload-343495 - found existing host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"}
	I1212 21:10:36.921770   60628 main.go:141] libmachine: (no-preload-343495) Reserved static IP address: 192.168.61.176
	I1212 21:10:36.921785   60628 main.go:141] libmachine: (no-preload-343495) Waiting for SSH to be available...
	I1212 21:10:36.921802   60628 main.go:141] libmachine: (no-preload-343495) DBG | Getting to WaitForSSH function...
	I1212 21:10:36.924581   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.924908   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.924941   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.925154   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH client type: external
	I1212 21:10:36.925191   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa (-rw-------)
	I1212 21:10:36.925223   60628 main.go:141] libmachine: (no-preload-343495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:36.925234   60628 main.go:141] libmachine: (no-preload-343495) DBG | About to run SSH command:
	I1212 21:10:36.925246   60628 main.go:141] libmachine: (no-preload-343495) DBG | exit 0
	I1212 21:10:37.059619   60628 main.go:141] libmachine: (no-preload-343495) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:37.060017   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetConfigRaw
	I1212 21:10:37.060752   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.063599   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064325   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.064365   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064468   60628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/config.json ...
	I1212 21:10:37.064705   60628 machine.go:88] provisioning docker machine ...
	I1212 21:10:37.064733   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:37.064938   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065115   60628 buildroot.go:166] provisioning hostname "no-preload-343495"
	I1212 21:10:37.065144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065286   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.068118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068517   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.068548   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.068980   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069141   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069312   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.069507   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.069958   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.069985   60628 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-343495 && echo "no-preload-343495" | sudo tee /etc/hostname
	I1212 21:10:37.212905   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-343495
	
	I1212 21:10:37.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.215789   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216147   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.216182   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.216525   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216704   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216877   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.217037   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.217425   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.217444   60628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-343495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-343495/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-343495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:37.355687   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:37.355721   60628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:37.355754   60628 buildroot.go:174] setting up certificates
	I1212 21:10:37.355767   60628 provision.go:83] configureAuth start
	I1212 21:10:37.355780   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.356089   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.359197   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359644   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.359717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359937   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.362695   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363043   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.363079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363251   60628 provision.go:138] copyHostCerts
	I1212 21:10:37.363316   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:37.363336   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:37.363410   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:37.363536   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:37.363549   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:37.363585   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:37.363671   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:37.363677   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:37.363703   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:37.363757   60628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.no-preload-343495 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-343495]
	I1212 21:10:37.526121   60628 provision.go:172] copyRemoteCerts
	I1212 21:10:37.526205   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:37.526234   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.529079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529425   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.529492   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529659   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.529850   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.530009   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.530153   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:37.632384   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:37.661242   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:10:37.689215   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:10:37.714781   60628 provision.go:86] duration metric: configureAuth took 358.999712ms
	I1212 21:10:37.714819   60628 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:37.715040   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:10:37.715144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.718379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.718815   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.718844   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.719212   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.719422   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719625   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719789   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.719975   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.720484   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.720519   60628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:38.062630   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:38.062660   60628 machine.go:91] provisioned docker machine in 997.934774ms
	I1212 21:10:38.062673   60628 start.go:300] post-start starting for "no-preload-343495" (driver="kvm2")
	I1212 21:10:38.062687   60628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:38.062707   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.062999   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:38.063033   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.065898   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066299   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.066331   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066626   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.066878   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.067063   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.067228   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.164612   60628 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:38.170132   60628 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:38.170162   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:38.170244   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:38.170351   60628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:38.170467   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:38.181959   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:38.208734   60628 start.go:303] post-start completed in 146.045424ms
	I1212 21:10:38.208762   60628 fix.go:56] fixHost completed within 24.051421131s
	I1212 21:10:38.208782   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.212118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212519   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.212551   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212732   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213124   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213268   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.213436   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:38.213801   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:38.213827   60628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:38.337185   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415438.279018484
	
	I1212 21:10:38.337225   60628 fix.go:206] guest clock: 1702415438.279018484
	I1212 21:10:38.337239   60628 fix.go:219] Guest: 2023-12-12 21:10:38.279018484 +0000 UTC Remote: 2023-12-12 21:10:38.208766005 +0000 UTC m=+370.324656490 (delta=70.252479ms)
	I1212 21:10:38.337264   60628 fix.go:190] guest clock delta is within tolerance: 70.252479ms
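The garbled "date +%!s(MISSING).%!N(MISSING)" above is the same logging artifact; judging from the seconds.nanoseconds value that comes back, the guest clock is read with something equivalent to:

    date +%s.%N

and compared against the host wall clock to produce the delta reported here.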
	I1212 21:10:38.337275   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 24.179969571s
	I1212 21:10:38.337305   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.337527   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:38.340658   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341019   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.341053   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341233   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.341952   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342179   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342291   60628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:38.342336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.342388   60628 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:38.342413   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.345379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345419   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345762   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345809   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345841   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345864   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.346049   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346055   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346433   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346438   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346597   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.346596   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.467200   60628 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:38.475578   60628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:38.627838   60628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:38.634520   60628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:38.634614   60628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:38.654823   60628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:38.654847   60628 start.go:475] detecting cgroup driver to use...
	I1212 21:10:38.654928   60628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:38.673550   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:38.691252   60628 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:38.691318   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:38.707542   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:38.724686   60628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:38.843033   60628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:38.973535   60628 docker.go:219] disabling docker service ...
	I1212 21:10:38.973610   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:38.987940   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:39.001346   60628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:39.105401   60628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:39.209198   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:39.222268   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:39.243154   60628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:39.243226   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.253418   60628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:39.253497   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.263273   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.274546   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
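The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup. As a rough sketch of the result (section headers assumed; the real drop-in may carry other keys), /etc/crio/crio.conf.d/02-crio.conf should now contain:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"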
	I1212 21:10:39.284359   60628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:39.294828   60628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:39.304818   60628 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:39.304894   60628 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:39.318541   60628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
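The sysctl probe fails only because the br_netfilter module is not loaded yet; loading it and re-checking, together with the ip_forward write above, amounts to the following manual sequence (a sketch, not taken verbatim from the log):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables     # should resolve once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward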
	I1212 21:10:39.328819   60628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:39.439285   60628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:39.619385   60628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:39.619462   60628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:39.625279   60628 start.go:543] Will wait 60s for crictl version
	I1212 21:10:39.625358   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:39.630234   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:39.680505   60628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:39.680579   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.736272   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.796111   60628 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 21:10:39.732208   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.732243   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.732258   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.761735   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.761771   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.984129   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.990620   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:39.990650   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.484444   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.492006   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:40.492039   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.983459   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.990813   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:10:41.001024   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:10:41.001055   61298 api_server.go:131] duration metric: took 4.518922579s to wait for apiserver health ...
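The 403 -> 500 -> 200 progression above can be reproduced by hand against the same endpoint: anonymous requests are rejected until the RBAC bootstrap roles exist, and the verbose form returns the per-check breakdown seen in the 500 responses. A minimal check from the host, skipping TLS verification, assuming the node IP is reachable:

    curl -k https://192.168.72.253:8444/healthz
    curl -k "https://192.168.72.253:8444/healthz?verbose"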
	I1212 21:10:41.001070   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:41.001078   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:41.003043   61298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:41.004669   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:41.084775   61298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
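The 457-byte conflist pushed here is minikube's bridge CNI configuration. Its exact contents are not logged; a representative bridge conflist of this shape (pod subnet and plugin options are assumptions) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }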
	I1212 21:10:41.173688   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:41.201100   61298 system_pods.go:59] 9 kube-system pods found
	I1212 21:10:41.201132   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201140   61298 system_pods.go:61] "coredns-5dd5756b68-hc52p" [f8895d1e-3484-4ffe-9d11-f5e4b7617c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201148   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:10:41.201158   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:10:41.201165   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:10:41.201171   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:10:41.201177   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:10:41.201182   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:10:41.201187   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:10:41.201193   61298 system_pods.go:74] duration metric: took 27.476871ms to wait for pod list to return data ...
	I1212 21:10:41.201203   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:41.205597   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:41.205624   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:41.205638   61298 node_conditions.go:105] duration metric: took 4.431218ms to run NodePressure ...
	I1212 21:10:41.205653   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:41.516976   61298 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529555   61298 kubeadm.go:787] kubelet initialised
	I1212 21:10:41.529592   61298 kubeadm.go:788] duration metric: took 12.533051ms waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529601   61298 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:41.538991   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.546618   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546645   61298 pod_ready.go:81] duration metric: took 7.620954ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.546658   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546667   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.556921   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556951   61298 pod_ready.go:81] duration metric: took 10.273719ms waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.556963   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556972   61298 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.563538   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563570   61298 pod_ready.go:81] duration metric: took 6.584443ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.563586   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563598   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.578973   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579009   61298 pod_ready.go:81] duration metric: took 15.402148ms waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.579025   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579046   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.978938   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978972   61298 pod_ready.go:81] duration metric: took 399.914995ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.978990   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978999   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:38.930743   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:41.429587   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:39.798106   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:39.800962   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801364   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:39.801399   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801592   60628 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:39.806328   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:39.821949   60628 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:10:39.822014   60628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:39.873704   60628 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:10:39.873733   60628 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:10:39.873820   60628 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.873840   60628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.873859   60628 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:39.874021   60628 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.874062   60628 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 21:10:39.874043   60628 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.873836   60628 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.874359   60628 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.875369   60628 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875379   60628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.875390   60628 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 21:10:39.875428   60628 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.875284   60628 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.875803   60628 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.060906   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.061267   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.063065   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.074673   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 21:10:40.076082   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.080787   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.108962   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.169237   60628 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 21:10:40.169289   60628 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.169363   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.172419   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.251588   60628 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 21:10:40.251638   60628 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.251684   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.264051   60628 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 21:10:40.264146   60628 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.264227   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397546   60628 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 21:10:40.397590   60628 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.397640   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397669   60628 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 21:10:40.397709   60628 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.397774   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397876   60628 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 21:10:40.397978   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.398033   60628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:10:40.398064   60628 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.398079   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.398105   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397976   60628 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.398142   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.398143   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.418430   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.418500   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.530581   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530693   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.530781   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530584   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 21:10:40.530918   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:40.544770   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.544970   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 21:10:40.545108   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:40.567016   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567130   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567196   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.567297   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.604461   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 21:10:40.604484   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.604531   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 21:10:40.604488   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604644   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604590   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.612665   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 21:10:40.612741   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612794   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612800   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 21:10:40.612935   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:40.615786   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
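Because the cached tarballs already exist under /var/lib/minikube/images, the copies are skipped and each image is loaded into the CRI-O store with podman, as started for kube-controller-manager above. To spot-check the result on the guest one could run (illustrative, not from the log):

    sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
    sudo crictl images | grep -E 'etcd|kube-apiserver'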
	I1212 21:10:42.378453   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378486   61298 pod_ready.go:81] duration metric: took 399.478547ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.378499   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378508   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:42.778834   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778871   61298 pod_ready.go:81] duration metric: took 400.345358ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.778887   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778897   61298 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:43.179851   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179879   61298 pod_ready.go:81] duration metric: took 400.97377ms waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:43.179891   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179898   61298 pod_ready.go:38] duration metric: took 1.6502873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:43.179913   61298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:10:43.196087   61298 ops.go:34] apiserver oom_adj: -16
	I1212 21:10:43.196114   61298 kubeadm.go:640] restartCluster took 20.701074763s
	I1212 21:10:43.196126   61298 kubeadm.go:406] StartCluster complete in 20.766085453s
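Every per-pod wait above is skipped because the node itself still reports Ready=False. Once kubeadm's restart settles, the same state can be inspected directly; the kubectl context is assumed to match the profile name, as elsewhere in this report:

    kubectl --context default-k8s-diff-port-171828 get nodes
    kubectl --context default-k8s-diff-port-171828 -n kube-system get pods -o wide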
	I1212 21:10:43.196146   61298 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.196225   61298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:10:43.198844   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.199122   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:10:43.199268   61298 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
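Only default-storageclass, metrics-server, and storage-provisioner are marked true in the map above, so those are the addons reconciled below. The equivalent per-profile CLI view (illustrative only):

    out/minikube-linux-amd64 -p default-k8s-diff-port-171828 addons list
    out/minikube-linux-amd64 -p default-k8s-diff-port-171828 addons enable metrics-server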
	I1212 21:10:43.199342   61298 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199363   61298 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199372   61298 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:10:43.199396   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:43.199456   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199373   61298 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199492   61298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-171828"
	I1212 21:10:43.199389   61298 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199551   61298 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199568   61298 addons.go:240] addon metrics-server should already be in state true
	I1212 21:10:43.199637   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199891   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199915   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.199922   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199945   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.200148   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.200177   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.218067   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I1212 21:10:43.218679   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1212 21:10:43.218817   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219111   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219234   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1212 21:10:43.219356   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219372   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219590   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219607   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219699   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219807   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220061   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220258   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.220278   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.220324   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.220436   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.220488   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.220676   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.221418   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.221444   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.224718   61298 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.224742   61298 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:10:43.224769   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.225189   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.225227   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.225431   61298 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-171828" context rescaled to 1 replicas
	I1212 21:10:43.225467   61298 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:10:43.228523   61298 out.go:177] * Verifying Kubernetes components...
	I1212 21:10:43.230002   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:10:43.239165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1212 21:10:43.239749   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.240357   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.240383   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.240761   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.240937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.241446   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1212 21:10:43.241951   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.242522   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.242541   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.242864   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.242931   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.244753   61298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:43.243219   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.246309   61298 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.246332   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:10:43.246358   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.248809   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.250840   61298 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:10:43.252430   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:10:43.251041   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.250309   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.247068   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I1212 21:10:43.252596   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:10:43.252622   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.252718   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.252745   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.253368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.253677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.253846   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.254434   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.259686   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.259697   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.259748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259844   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.259883   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.259973   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.260149   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.260361   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.260420   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.261546   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.261594   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.284357   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I1212 21:10:43.284945   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.285431   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.285444   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.286009   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.286222   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.288257   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.288542   61298 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.288565   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:10:43.288586   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.291842   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.292527   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.292680   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.293076   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.293350   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.293512   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.293683   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.405154   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.426115   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:10:43.426141   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:10:43.486953   61298 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:10:43.486975   61298 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:43.491689   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:10:43.491709   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:10:43.505611   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.538745   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:43.538785   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:10:43.600598   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:44.933368   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.528176624s)
	I1212 21:10:44.933442   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.427784857s)
	I1212 21:10:44.933493   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933511   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933539   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332913009s)
	I1212 21:10:44.933496   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933559   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933566   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933569   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933926   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.933943   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933944   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.933955   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933964   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933974   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934081   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934096   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934118   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934120   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934127   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934132   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934138   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934156   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934372   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934397   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934401   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934808   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934845   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934858   61298 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-171828"
	I1212 21:10:44.937727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.937783   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.937806   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.945948   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.945966   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.946202   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.946220   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.949385   61298 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1212 21:10:43.688668   60948 retry.go:31] will retry after 13.919612963s: kubelet not initialised
	I1212 21:10:44.951009   61298 addons.go:502] enable addons completed in 1.751742212s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1212 21:10:45.583280   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.432062   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:45.929995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.305027541s)
	I1212 21:10:43.909740   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.296738263s)
	I1212 21:10:43.909764   60628 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:43.909770   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 21:10:43.909810   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:45.879475   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969630074s)
	I1212 21:10:45.879502   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 21:10:45.879527   60628 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:45.879592   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:47.584004   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:50.113807   61298 node_ready.go:49] node "default-k8s-diff-port-171828" has status "Ready":"True"
	I1212 21:10:50.113837   61298 node_ready.go:38] duration metric: took 6.626786171s waiting for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:50.113850   61298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:50.128903   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656130   61298 pod_ready.go:92] pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:50.656153   61298 pod_ready.go:81] duration metric: took 527.212389ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656161   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:47.931716   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.433176   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.267864   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.388242252s)
	I1212 21:10:50.267898   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 21:10:50.267931   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:50.267977   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:52.845895   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.577890173s)
	I1212 21:10:52.845935   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 21:10:52.845969   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.846023   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.677971   61298 pod_ready.go:102] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:53.179154   61298 pod_ready.go:92] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.179186   61298 pod_ready.go:81] duration metric: took 2.523018353s waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.179200   61298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185649   61298 pod_ready.go:92] pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.185673   61298 pod_ready.go:81] duration metric: took 6.463925ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185685   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193280   61298 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.193303   61298 pod_ready.go:81] duration metric: took 1.00761061s waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193313   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484196   61298 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.484223   61298 pod_ready.go:81] duration metric: took 290.902142ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484240   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883746   61298 pod_ready.go:92] pod "kube-proxy-47qmb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.883773   61298 pod_ready.go:81] duration metric: took 399.524854ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883784   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283637   61298 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:55.283670   61298 pod_ready.go:81] duration metric: took 399.871874ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283684   61298 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:52.931372   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.932174   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.204367   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.358317317s)
	I1212 21:10:54.204393   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 21:10:54.204425   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:54.204485   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:56.066774   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.862261726s)
	I1212 21:10:56.066802   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 21:10:56.066825   60628 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:56.066874   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:57.118959   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.052055479s)
	I1212 21:10:57.118985   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 21:10:57.119009   60628 cache_images.go:123] Successfully loaded all cached images
	I1212 21:10:57.119021   60628 cache_images.go:92] LoadImages completed in 17.245274715s
	I1212 21:10:57.119103   60628 ssh_runner.go:195] Run: crio config
	I1212 21:10:57.180068   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:10:57.180093   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:57.180109   60628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:57.180127   60628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-343495 NodeName:no-preload-343495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:57.180250   60628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-343495"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:57.180330   60628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-343495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:10:57.180382   60628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:10:57.191949   60628 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:57.192034   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:57.202921   60628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 21:10:57.219512   60628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:10:57.235287   60628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1212 21:10:57.252278   60628 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:57.256511   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:57.268744   60628 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495 for IP: 192.168.61.176
	I1212 21:10:57.268781   60628 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:57.268959   60628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:57.269032   60628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:57.269133   60628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/client.key
	I1212 21:10:57.269228   60628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key.492ad1cf
	I1212 21:10:57.269316   60628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key
	I1212 21:10:57.269466   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:57.269511   60628 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:57.269526   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:57.269562   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:57.269597   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:57.269629   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:57.269685   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:57.270311   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:57.295960   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:10:57.320157   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:57.344434   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:57.368906   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:57.391830   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:57.415954   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:57.441182   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:57.465055   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:57.489788   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:57.513828   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:57.536138   60628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:57.553168   60628 ssh_runner.go:195] Run: openssl version
	I1212 21:10:57.558771   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:57.570141   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574935   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574990   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.580985   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:57.592528   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:57.603477   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608448   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608511   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.614316   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:57.625667   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:57.637284   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642258   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642323   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.648072   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:57.659762   60628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:57.664517   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:57.670385   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:57.676336   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:57.682074   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:57.688387   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:57.694542   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:10:57.700400   60628 kubeadm.go:404] StartCluster: {Name:no-preload-343495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:57.700520   60628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:57.700576   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:57.738703   60628 cri.go:89] found id: ""
	I1212 21:10:57.738776   60628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:57.749512   60628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:57.749538   60628 kubeadm.go:636] restartCluster start
	I1212 21:10:57.749610   60628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:57.758905   60628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.760000   60628 kubeconfig.go:92] found "no-preload-343495" server: "https://192.168.61.176:8443"
	I1212 21:10:57.762219   60628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:57.773107   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.773181   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.785478   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.785500   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.785554   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.797412   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.613799   60948 retry.go:31] will retry after 13.009137494s: kubelet not initialised
	I1212 21:10:57.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.591232   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:02.093666   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:57.429861   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.429944   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:01.438267   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:58.297630   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.297712   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.312155   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:58.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.797652   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.809726   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.297677   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.309875   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.798441   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.798531   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.810533   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.298154   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.298237   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.310050   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.797683   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.809712   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.298094   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.298224   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.310181   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.797635   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.797742   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.809336   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.297912   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.297997   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.309215   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.797666   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.797749   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.808815   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.590426   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.590850   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.929977   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.429697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.297975   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.298066   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.308865   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:03.798103   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.798207   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.809553   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.297580   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.297653   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.309100   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.797646   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.797767   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.809269   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.297665   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.309281   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.797809   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.797898   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.809794   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.298381   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.298497   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.309467   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.798050   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.798132   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.809758   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.298354   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:07.298434   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:07.309655   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.773157   60628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:11:07.773216   60628 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:11:07.773229   60628 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:11:07.773290   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:11:07.815986   60628 cri.go:89] found id: ""
	I1212 21:11:07.816068   60628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:11:07.832950   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:11:07.842287   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:11:07.842353   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851694   60628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851720   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:10.630075   60948 kubeadm.go:787] kubelet initialised
	I1212 21:11:10.630105   60948 kubeadm.go:788] duration metric: took 47.146743334s waiting for restarted kubelet to initialise ...
	I1212 21:11:10.630116   60948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:10.637891   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644674   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.644700   60948 pod_ready.go:81] duration metric: took 6.771094ms waiting for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644710   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651801   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.651830   60948 pod_ready.go:81] duration metric: took 7.112566ms waiting for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651845   60948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659678   60948 pod_ready.go:92] pod "etcd-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.659700   60948 pod_ready.go:81] duration metric: took 7.845111ms waiting for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659711   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665929   60948 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.665958   60948 pod_ready.go:81] duration metric: took 6.237833ms waiting for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665972   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028938   60948 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.028961   60948 pod_ready.go:81] duration metric: took 362.981718ms waiting for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028973   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428824   60948 pod_ready.go:92] pod "kube-proxy-5mvzb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.428853   60948 pod_ready.go:81] duration metric: took 399.87314ms waiting for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428866   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828546   60948 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.828578   60948 pod_ready.go:81] duration metric: took 399.696769ms waiting for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828590   60948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:09.094309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:11.098257   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:08.928635   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:10.929896   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:07.988857   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.772924   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.980401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.108938   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.189716   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:11:09.189780   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.201432   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.722085   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.222325   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.721931   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.222186   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.721642   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.745977   60628 api_server.go:72] duration metric: took 2.556260463s to wait for apiserver process to appear ...
	I1212 21:11:11.746005   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:11:11.746025   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:14.135897   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.138482   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:13.590920   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.591230   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:12.931314   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.429327   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.294367   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.294401   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.294413   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.347744   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.347780   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.848435   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.853773   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:16.853823   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.348312   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.359543   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.359579   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.848425   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.853966   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.854006   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:18.348644   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:18.373028   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:11:18.385301   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:11:18.385341   60628 api_server.go:131] duration metric: took 6.639327054s to wait for apiserver health ...
	I1212 21:11:18.385353   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:11:18.385362   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:11:18.387289   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
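	(Editor's note: the api_server.go entries above show the harness polling https://192.168.61.176:8443/healthz, tolerating the transient 403/500 responses until the endpoint returns 200. The sketch below is an illustrative approximation of that kind of wait loop, not minikube's actual implementation; the URL, poll interval, timeout, and the decision to skip TLS verification are assumptions made for the example.)

	// healthzwait: minimal sketch of polling a kube-apiserver /healthz endpoint
	// until it reports 200, roughly mirroring the checks logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, interval, timeout time.Duration) error {
		// During bootstrap the apiserver may present a cert the client does not
		// yet trust; this example skips verification, a real client would use the cluster CA.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above; interval and timeout are assumed values.
		if err := waitForHealthz("https://192.168.61.176:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}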
	I1212 21:11:18.636422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:20.636472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.592197   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.593157   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:21.594049   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.434254   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.930697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:18.388998   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:11:18.449634   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:11:18.491001   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:11:18.517694   60628 system_pods.go:59] 8 kube-system pods found
	I1212 21:11:18.517729   60628 system_pods.go:61] "coredns-76f75df574-s9jgn" [b13d32b4-a44b-4f79-bece-d0adafef4c7c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:11:18.517740   60628 system_pods.go:61] "etcd-no-preload-343495" [ad48db04-9c79-48e9-a001-1a9061c43cb9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:11:18.517754   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [24d024c1-a89f-4ede-8dbf-7502f0179cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:11:18.517760   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [10ce49e3-2679-4ac5-89aa-9179582ae778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:11:18.517765   60628 system_pods.go:61] "kube-proxy-492l6" [3a2bbe46-0506-490f-aae8-a97e48f3205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:11:18.517773   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [bca80470-c204-4a34-9c7d-5de3ad382c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:11:18.517778   60628 system_pods.go:61] "metrics-server-57f55c9bc5-tmmk4" [11066021-353e-418e-9c7f-78e72dae44a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:11:18.517785   60628 system_pods.go:61] "storage-provisioner" [e681d4cd-f2f6-4cf3-ba09-0f361a64aafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:11:18.517794   60628 system_pods.go:74] duration metric: took 26.756848ms to wait for pod list to return data ...
	I1212 21:11:18.517815   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:11:18.521330   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:11:18.521362   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:11:18.521377   60628 node_conditions.go:105] duration metric: took 3.557177ms to run NodePressure ...
	I1212 21:11:18.521401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:18.945267   60628 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958848   60628 kubeadm.go:787] kubelet initialised
	I1212 21:11:18.958877   60628 kubeadm.go:788] duration metric: took 13.578451ms waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958886   60628 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:18.964819   60628 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:20.987111   60628 pod_ready.go:102] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.494268   60628 pod_ready.go:92] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:22.494299   60628 pod_ready.go:81] duration metric: took 3.529452237s waiting for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:22.494311   60628 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:23.136140   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:25.635800   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.093215   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.590861   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.429921   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.928565   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.929668   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.514490   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.013783   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.637165   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:30.133948   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.091057   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.598428   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:28.930654   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.428436   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.514918   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.514945   60628 pod_ready.go:81] duration metric: took 7.020626508s waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.514955   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524669   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.524696   60628 pod_ready.go:81] duration metric: took 9.734059ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524709   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541808   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.541830   60628 pod_ready.go:81] duration metric: took 17.113672ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541839   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553955   60628 pod_ready.go:92] pod "kube-proxy-492l6" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.553979   60628 pod_ready.go:81] duration metric: took 12.134143ms waiting for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553988   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562798   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.562835   60628 pod_ready.go:81] duration metric: took 8.836628ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562850   60628 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:31.818614   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
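	(Editor's note: the pod_ready.go entries throughout this excerpt show the harness repeatedly checking each system pod's Ready condition; the metrics-server pods never become Ready, and one of these waits hits its 4m0s deadline with "context deadline exceeded" near the end of this excerpt. Below is a minimal client-go sketch of such a readiness wait, offered only for orientation; it is not minikube's code, and the kubeconfig path, namespace, pod name, and timings are assumptions.)

	// podreadywait: illustrative client-go sketch of waiting for a pod's Ready
	// condition, similar in spirit to the pod_ready.go checks in this log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, interval time.Duration) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod has status "Ready":"True"
					}
				}
				fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // e.g. context deadline exceeded once the overall wait expires
			case <-time.After(interval):
			}
		}
	}

	func main() {
		// Kubeconfig path is a placeholder; pod name copied from the log above.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitForPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-tmmk4", 2*time.Second); err != nil {
			fmt.Println("wait failed:", err)
		}
	}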
	I1212 21:11:32.134558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.135376   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.634429   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.090158   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.091290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.429336   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:35.430448   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.819222   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.318847   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.637527   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:41.134980   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.115262   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.591502   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:37.929700   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:39.929830   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.318911   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.319619   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.319750   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.135558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.635174   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.090309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.590529   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.434126   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.931810   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.818997   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.321699   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.635472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.636294   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.640471   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.590577   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.590885   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.591122   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.429836   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.431518   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.928631   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.823419   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:52.319752   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.137390   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.634152   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.593196   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.089777   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.929750   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:55.932860   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.321554   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.819877   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.136605   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.092816   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.591682   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.429543   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.432255   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:59.318053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.325068   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.137023   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.635397   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.091397   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.094195   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:02.933370   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.430020   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.819751   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:06.319806   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.137648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.635154   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.591471   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.091503   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.430684   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:09.929393   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.319984   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.821053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.637206   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.136850   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.590992   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.591391   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.591744   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.429299   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.429724   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.430114   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:13.329939   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.820117   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.820519   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.199675   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.635179   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.635426   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.091628   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.091739   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:18.929340   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.929933   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.319134   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.819399   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:24.133408   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:26.134293   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:23.093543   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.591828   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.930710   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.434148   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.319949   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.337078   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.134422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.137461   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.090730   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.092555   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.928685   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.929200   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.929272   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.819461   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.819541   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.633893   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.636198   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.636373   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.590019   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.590953   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.591420   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.929488   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:35.929671   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.819661   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.322177   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.137315   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.635168   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.097607   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.590836   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:37.930820   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.930916   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:38.324332   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:40.819395   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.819784   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.640489   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.134648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.590910   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.592083   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.429717   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:44.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.431053   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.320122   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:47.819547   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.135328   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.137213   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.091979   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.093149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.929529   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:51.428177   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.319560   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.820242   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.635136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.637000   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.591430   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.090634   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:53.429307   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.429455   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.821647   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.319971   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.135608   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.137606   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.634197   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.590565   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:00.091074   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.429785   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.928834   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.818255   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.819526   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:03.635008   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.134591   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.591023   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.592260   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.092331   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.430411   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.930385   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.326885   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.822828   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:08.135379   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:10.136957   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.590114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.593478   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.434219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.929736   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.930477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.322955   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.819793   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:12.137554   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.635349   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.637857   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.092558   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.591772   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.429362   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.931219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.319867   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.325224   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.135196   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.634789   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.090842   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.591235   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.929464   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:18.326463   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:20.819839   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:22.820060   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.636879   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.135188   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.591676   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.591833   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.929811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.429286   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.319356   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.819668   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.634130   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.635441   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.591961   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.090560   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:32.091429   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.929344   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.929561   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:29.820548   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:31.820901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.134798   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.635317   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.094290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.589895   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.429811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.429995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.319447   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.822690   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.636833   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.136281   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:38.591586   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.090302   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.929337   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.428532   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:39.321656   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.820917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.635037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.135037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:43.091587   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.590322   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.429616   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.430483   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.431960   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.319403   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.326448   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.136136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:49.635013   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.635308   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.592114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:50.089825   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:52.090721   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.928619   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.429031   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.820121   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.319794   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.134872   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:54.589746   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.590432   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.429817   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:55.929211   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.820666   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.322986   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.135622   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.139553   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.592602   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:01.091154   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:57.929777   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:59.930300   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.818901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.819587   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.634488   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.636059   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:03.591886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:06.091886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.432472   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.125384   60833 pod_ready.go:81] duration metric: took 4m0.000960425s waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:05.125428   60833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:05.125437   60833 pod_ready.go:38] duration metric: took 4m2.799403108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:05.125453   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:05.125518   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:05.125592   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:05.203017   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:05.203045   60833 cri.go:89] found id: ""
	I1212 21:14:05.203054   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:05.203115   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.208622   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:05.208693   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:05.250079   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.250102   60833 cri.go:89] found id: ""
	I1212 21:14:05.250118   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:05.250161   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.254870   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:05.254946   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:05.323718   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:05.323748   60833 cri.go:89] found id: ""
	I1212 21:14:05.323757   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:05.323819   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.328832   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:05.328902   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:05.372224   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.372252   60833 cri.go:89] found id: ""
	I1212 21:14:05.372262   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:05.372316   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.377943   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:05.378007   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:05.417867   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:05.417894   60833 cri.go:89] found id: ""
	I1212 21:14:05.417905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:05.417961   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.422198   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:05.422264   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:05.462031   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:05.462052   60833 cri.go:89] found id: ""
	I1212 21:14:05.462059   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:05.462114   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.466907   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:05.466962   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:05.512557   60833 cri.go:89] found id: ""
	I1212 21:14:05.512585   60833 logs.go:284] 0 containers: []
	W1212 21:14:05.512592   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:05.512597   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:05.512663   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:05.553889   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.553914   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:05.553921   60833 cri.go:89] found id: ""
	I1212 21:14:05.553929   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:05.553982   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.558864   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.563550   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:05.563572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:05.627093   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:05.627135   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:05.642800   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:05.642827   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:05.820642   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:05.820683   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.871256   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:05.871299   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.913399   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:05.913431   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.955061   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:05.955103   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:06.012639   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:06.012681   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:06.057933   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:06.057970   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:06.110367   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:06.110400   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:06.173711   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:06.173746   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:06.214291   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:06.214328   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:06.260105   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:06.260142   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:03.320010   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.321011   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.821313   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.134137   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.635405   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:08.591048   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:10.593286   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.219373   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:09.237985   60833 api_server.go:72] duration metric: took 4m14.403294004s to wait for apiserver process to appear ...
	I1212 21:14:09.238014   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:09.238057   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:09.238119   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:09.281005   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:09.281028   60833 cri.go:89] found id: ""
	I1212 21:14:09.281037   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:09.281097   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.285354   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:09.285436   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:09.336833   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.336864   60833 cri.go:89] found id: ""
	I1212 21:14:09.336874   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:09.336937   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.342850   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:09.342928   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:09.387107   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.387133   60833 cri.go:89] found id: ""
	I1212 21:14:09.387143   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:09.387202   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.392729   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:09.392806   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:09.433197   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:09.433225   60833 cri.go:89] found id: ""
	I1212 21:14:09.433232   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:09.433281   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.438043   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:09.438092   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:09.486158   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.486185   60833 cri.go:89] found id: ""
	I1212 21:14:09.486200   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:09.486255   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.491667   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:09.491735   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:09.536085   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:09.536108   60833 cri.go:89] found id: ""
	I1212 21:14:09.536114   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:09.536165   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.540939   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:09.541008   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:09.585160   60833 cri.go:89] found id: ""
	I1212 21:14:09.585187   60833 logs.go:284] 0 containers: []
	W1212 21:14:09.585195   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:09.585200   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:09.585254   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:09.628972   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:09.629001   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:09.629008   60833 cri.go:89] found id: ""
	I1212 21:14:09.629017   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:09.629075   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.634242   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.639308   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:09.639344   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:09.766299   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:09.766329   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.816655   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:09.816699   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.863184   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:09.863212   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.924345   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:09.924382   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:10.363852   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:10.363897   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:10.417375   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:10.417407   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:10.432758   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:10.432788   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:10.483732   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:10.483778   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:10.538254   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:10.538283   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:10.598142   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:10.598174   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:10.650678   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:10.650710   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:10.697971   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:10.698000   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:10.318636   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.321917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.134600   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:14.134822   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.634845   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.091008   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:15.589901   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.241720   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:14:13.248465   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:14:13.249814   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:14:13.249839   60833 api_server.go:131] duration metric: took 4.011816395s to wait for apiserver health ...
	I1212 21:14:13.249848   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:14:13.249871   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:13.249916   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:13.300138   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:13.300161   60833 cri.go:89] found id: ""
	I1212 21:14:13.300171   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:13.300228   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.306350   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:13.306424   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:13.358644   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.358667   60833 cri.go:89] found id: ""
	I1212 21:14:13.358676   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:13.358737   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.363921   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:13.363989   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:13.413339   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:13.413366   60833 cri.go:89] found id: ""
	I1212 21:14:13.413374   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:13.413420   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.418188   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:13.418248   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:13.461495   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:13.461522   60833 cri.go:89] found id: ""
	I1212 21:14:13.461532   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:13.461581   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.465878   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:13.465951   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:13.511866   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.511895   60833 cri.go:89] found id: ""
	I1212 21:14:13.511905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:13.511960   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.516312   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:13.516381   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:13.560993   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.561023   60833 cri.go:89] found id: ""
	I1212 21:14:13.561034   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:13.561092   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.565439   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:13.565514   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:13.608401   60833 cri.go:89] found id: ""
	I1212 21:14:13.608434   60833 logs.go:284] 0 containers: []
	W1212 21:14:13.608445   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:13.608452   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:13.608507   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:13.661929   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:13.661956   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:13.661963   60833 cri.go:89] found id: ""
	I1212 21:14:13.661972   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:13.662036   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.667039   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.671770   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:13.671791   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:13.793637   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:13.793671   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.844253   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:13.844286   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.886965   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:13.886997   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.946537   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:13.946572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:13.999732   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:13.999769   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:14.015819   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:14.015849   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:14.063649   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:14.063684   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:14.116465   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:14.116499   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:14.179838   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:14.179875   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:14.224213   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:14.224243   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:14.262832   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:14.262858   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:14.307981   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:14.308008   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:17.188864   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:14:17.188919   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.188927   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.188934   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.188943   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.188950   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.188959   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.188980   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.188988   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.188996   60833 system_pods.go:74] duration metric: took 3.939142839s to wait for pod list to return data ...
	I1212 21:14:17.189005   60833 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:14:17.192352   60833 default_sa.go:45] found service account: "default"
	I1212 21:14:17.192390   60833 default_sa.go:55] duration metric: took 3.37914ms for default service account to be created ...
	I1212 21:14:17.192400   60833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:14:17.198396   60833 system_pods.go:86] 8 kube-system pods found
	I1212 21:14:17.198424   60833 system_pods.go:89] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.198429   60833 system_pods.go:89] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.198433   60833 system_pods.go:89] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.198438   60833 system_pods.go:89] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.198442   60833 system_pods.go:89] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.198446   60833 system_pods.go:89] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.198455   60833 system_pods.go:89] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.198459   60833 system_pods.go:89] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.198466   60833 system_pods.go:126] duration metric: took 6.060971ms to wait for k8s-apps to be running ...
	I1212 21:14:17.198473   60833 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:14:17.198513   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:14:17.217190   60833 system_svc.go:56] duration metric: took 18.71037ms WaitForService to wait for kubelet.
	I1212 21:14:17.217224   60833 kubeadm.go:581] duration metric: took 4m22.382539055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:14:17.217249   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:14:17.221504   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:14:17.221540   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:14:17.221555   60833 node_conditions.go:105] duration metric: took 4.300742ms to run NodePressure ...
	I1212 21:14:17.221569   60833 start.go:228] waiting for startup goroutines ...
	I1212 21:14:17.221577   60833 start.go:233] waiting for cluster config update ...
	I1212 21:14:17.221594   60833 start.go:242] writing updated cluster config ...
	I1212 21:14:17.221939   60833 ssh_runner.go:195] Run: rm -f paused
	I1212 21:14:17.277033   60833 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:14:17.279044   60833 out.go:177] * Done! kubectl is now configured to use "embed-certs-831188" cluster and "default" namespace by default
	I1212 21:14:14.818262   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.823731   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:18.634990   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.135517   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:17.593149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:20.091419   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:22.091781   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:19.320462   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.819129   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.636400   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.134084   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:24.591552   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:27.090974   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.825879   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.318691   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.135741   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:30.635812   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:29.091882   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.590150   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.819815   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.319140   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.134738   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:35.637961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.591929   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.091976   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.819872   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.325409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.139066   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.635659   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:41.090674   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.819966   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.820281   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.135071   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.635762   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.091695   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.595126   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.323343   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.819822   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.134846   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.135229   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.092328   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.591470   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.319483   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.819702   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.135550   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:54.634163   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:56.634961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.593957   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.091338   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.284411   61298 pod_ready.go:81] duration metric: took 4m0.000712263s waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:55.284453   61298 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:55.284462   61298 pod_ready.go:38] duration metric: took 4m5.170596318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:55.284486   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:55.284536   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:55.284595   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:55.345012   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:55.345043   61298 cri.go:89] found id: ""
	I1212 21:14:55.345055   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:55.345118   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.350261   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:55.350329   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:55.403088   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:55.403116   61298 cri.go:89] found id: ""
	I1212 21:14:55.403124   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:55.403169   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.408043   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:55.408103   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:55.449581   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:55.449608   61298 cri.go:89] found id: ""
	I1212 21:14:55.449615   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:55.449670   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.454762   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:55.454828   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:55.502919   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:55.502960   61298 cri.go:89] found id: ""
	I1212 21:14:55.502970   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:55.503050   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.508035   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:55.508101   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:55.550335   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:55.550365   61298 cri.go:89] found id: ""
	I1212 21:14:55.550376   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:55.550437   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.554985   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:55.555043   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:55.599678   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:55.599707   61298 cri.go:89] found id: ""
	I1212 21:14:55.599716   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:55.599772   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.604830   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:55.604913   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:55.651733   61298 cri.go:89] found id: ""
	I1212 21:14:55.651767   61298 logs.go:284] 0 containers: []
	W1212 21:14:55.651774   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:55.651779   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:55.651825   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:55.690691   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:55.690716   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.690723   61298 cri.go:89] found id: ""
	I1212 21:14:55.690732   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:55.690778   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.695227   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.699700   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:14:55.699723   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:55.751176   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:14:55.751210   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.789388   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:55.789417   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:56.270250   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:56.270296   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:56.315517   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:56.315549   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:56.377591   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:14:56.377648   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:56.432089   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:56.432124   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:56.496004   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:56.496038   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:56.543979   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:56.544010   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:56.599613   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:14:56.599644   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:56.646113   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:14:56.646146   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:56.693081   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:56.693111   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:56.709557   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:56.709591   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:53.319742   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.320811   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:57.820478   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.134092   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:01.135233   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.366965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:59.385251   61298 api_server.go:72] duration metric: took 4m16.159743319s to wait for apiserver process to appear ...
	I1212 21:14:59.385280   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:59.385317   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:59.385365   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:59.433011   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:59.433038   61298 cri.go:89] found id: ""
	I1212 21:14:59.433047   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:59.433088   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.438059   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:59.438136   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:59.477000   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.477078   61298 cri.go:89] found id: ""
	I1212 21:14:59.477093   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:59.477153   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.481716   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:59.481791   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:59.526936   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.526966   61298 cri.go:89] found id: ""
	I1212 21:14:59.526975   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:59.527037   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.535907   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:59.535985   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:59.580818   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:59.580848   61298 cri.go:89] found id: ""
	I1212 21:14:59.580856   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:59.580916   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.585685   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:59.585733   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:59.640697   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:59.640721   61298 cri.go:89] found id: ""
	I1212 21:14:59.640731   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:59.640798   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.644940   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:59.645004   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:59.687873   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.687901   61298 cri.go:89] found id: ""
	I1212 21:14:59.687910   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:59.687960   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.692382   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:59.692442   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:59.735189   61298 cri.go:89] found id: ""
	I1212 21:14:59.735225   61298 logs.go:284] 0 containers: []
	W1212 21:14:59.735235   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:59.735256   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:59.735323   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:59.778668   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:59.778702   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:59.778708   61298 cri.go:89] found id: ""
	I1212 21:14:59.778717   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:59.778773   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.782827   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.787277   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:59.787303   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:59.802470   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:59.802499   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.864191   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:59.864225   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.911007   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:59.911032   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.975894   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:59.975932   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:00.021750   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:00.021780   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:00.061527   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:00.061557   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:00.484318   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:00.484359   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:00.549321   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:00.549357   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:00.600589   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:00.600629   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:00.643660   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:00.643690   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:00.698016   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:00.698047   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:00.741819   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:00.741850   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
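Note on the "Gathering logs for ..." steps above: minikube collects each control-plane component's output by shelling out to crictl on the guest and tailing that container's log. The following is a minimal, hypothetical Go sketch of that pattern only; the container ID is a placeholder and this is not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherContainerLogs tails the last n lines of a CRI container's log by
    // shelling out to crictl, mirroring the "crictl logs --tail 400 <id>"
    // invocations seen in the log above.
    func gatherContainerLogs(containerID string, tail int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), containerID).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Placeholder ID; in the real run the IDs come from
        // "crictl ps -a --quiet --name=<component>".
        logs, err := gatherContainerLogs("27b89c10d83b", 400)
        if err != nil {
            fmt.Println("error gathering logs:", err)
            return
        }
        fmt.Println(logs)
    }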
	I1212 21:15:00.319685   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:02.320017   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.136545   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:05.635632   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.383318   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:15:03.389750   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:15:03.391084   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:15:03.391117   61298 api_server.go:131] duration metric: took 4.005829911s to wait for apiserver health ...
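The apiserver health wait above amounts to polling the /healthz endpoint until it returns HTTP 200 with body "ok". Below is a minimal sketch of that check; the URL is copied from the log, while the skip-verify TLS client, timeouts, and polling interval are illustrative assumptions rather than minikube's actual settings.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver healthz endpoint until it returns
    // 200 "ok" or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping certificate verification for this illustration only;
            // minikube itself trusts the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.253:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }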
	I1212 21:15:03.391155   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:03.391181   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:15:03.391262   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:15:03.438733   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:03.438754   61298 cri.go:89] found id: ""
	I1212 21:15:03.438762   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:15:03.438809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.443133   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:15:03.443203   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:15:03.488960   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.488990   61298 cri.go:89] found id: ""
	I1212 21:15:03.489001   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:15:03.489058   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.493741   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:15:03.493807   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:15:03.541286   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:03.541316   61298 cri.go:89] found id: ""
	I1212 21:15:03.541325   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:15:03.541387   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.545934   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:15:03.546008   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:15:03.585937   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.585962   61298 cri.go:89] found id: ""
	I1212 21:15:03.585971   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:15:03.586039   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.590444   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:15:03.590516   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:15:03.626793   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:03.626826   61298 cri.go:89] found id: ""
	I1212 21:15:03.626835   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:15:03.626894   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.631829   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:15:03.631906   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:15:03.676728   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:03.676750   61298 cri.go:89] found id: ""
	I1212 21:15:03.676758   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:15:03.676809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.681068   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:15:03.681123   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:15:03.723403   61298 cri.go:89] found id: ""
	I1212 21:15:03.723430   61298 logs.go:284] 0 containers: []
	W1212 21:15:03.723437   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:15:03.723442   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:15:03.723502   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:15:03.772837   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.772868   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.772875   61298 cri.go:89] found id: ""
	I1212 21:15:03.772884   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:15:03.772940   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.777274   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.782354   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:15:03.782379   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.823892   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:03.823919   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.866656   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:15:03.866689   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.920757   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:03.920798   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.963737   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:03.963766   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:04.005559   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:15:04.005582   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:04.054868   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:04.054901   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:04.118941   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:04.118973   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:04.188272   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:15:04.188314   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:04.230473   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:15:04.230502   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:15:04.247069   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:04.247097   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:04.425844   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:04.425877   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:04.492751   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:04.492789   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:07.374768   61298 system_pods.go:59] 8 kube-system pods found
	I1212 21:15:07.374796   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.374801   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.374806   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.374810   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.374814   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.374818   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.374823   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.374828   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.374835   61298 system_pods.go:74] duration metric: took 3.983674471s to wait for pod list to return data ...
	I1212 21:15:07.374842   61298 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:07.377370   61298 default_sa.go:45] found service account: "default"
	I1212 21:15:07.377391   61298 default_sa.go:55] duration metric: took 2.542814ms for default service account to be created ...
	I1212 21:15:07.377400   61298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:07.384723   61298 system_pods.go:86] 8 kube-system pods found
	I1212 21:15:07.384751   61298 system_pods.go:89] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.384758   61298 system_pods.go:89] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.384767   61298 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.384776   61298 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.384782   61298 system_pods.go:89] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.384789   61298 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.384800   61298 system_pods.go:89] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.384809   61298 system_pods.go:89] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.384824   61298 system_pods.go:126] duration metric: took 7.416446ms to wait for k8s-apps to be running ...
	I1212 21:15:07.384838   61298 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:15:07.384893   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:07.402316   61298 system_svc.go:56] duration metric: took 17.466449ms WaitForService to wait for kubelet.
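The kubelet service check just above is a plain "systemctl is-active --quiet service kubelet" invocation: with --quiet, systemctl prints nothing and reports the unit state purely through its exit status. A local Go equivalent, offered only as a sketch of that one command:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeletActive reports whether the kubelet systemd unit is active,
    // matching the systemctl invocation shown in the log above.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", kubeletActive())
    }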
	I1212 21:15:07.402350   61298 kubeadm.go:581] duration metric: took 4m24.176848962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:15:07.402367   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:15:07.405566   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:15:07.405598   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:15:07.405616   61298 node_conditions.go:105] duration metric: took 3.244651ms to run NodePressure ...
	I1212 21:15:07.405628   61298 start.go:228] waiting for startup goroutines ...
	I1212 21:15:07.405637   61298 start.go:233] waiting for cluster config update ...
	I1212 21:15:07.405649   61298 start.go:242] writing updated cluster config ...
	I1212 21:15:07.405956   61298 ssh_runner.go:195] Run: rm -f paused
	I1212 21:15:07.457339   61298 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:15:07.459492   61298 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-171828" cluster and "default" namespace by default
	I1212 21:15:04.820409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:07.323802   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:08.135943   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:10.633863   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:11.829177   60948 pod_ready.go:81] duration metric: took 4m0.000566874s waiting for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:11.829231   60948 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:11.829268   60948 pod_ready.go:38] duration metric: took 4m1.1991406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:11.829314   60948 kubeadm.go:640] restartCluster took 5m11.909727716s
	W1212 21:15:11.829387   60948 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:11.829425   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:09.824487   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:12.319761   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:14.818898   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:16.822843   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:18.398899   60948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.569443116s)
	I1212 21:15:18.398988   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:18.421423   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:18.437661   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:18.459692   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:18.459747   60948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 21:15:18.529408   60948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1212 21:15:18.529485   60948 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:18.690865   60948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:18.691034   60948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:18.691165   60948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:18.939806   60948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:18.939966   60948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:18.949943   60948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1212 21:15:19.070931   60948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:19.072676   60948 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:19.072783   60948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:19.072868   60948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:19.072976   60948 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:19.073053   60948 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:19.073145   60948 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:19.073253   60948 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:19.073367   60948 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:19.073461   60948 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:19.073562   60948 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:19.073669   60948 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:19.073732   60948 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:19.073833   60948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:19.136565   60948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:19.614416   60948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:19.754535   60948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:20.149412   60948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:20.150707   60948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:20.152444   60948 out.go:204]   - Booting up control plane ...
	I1212 21:15:20.152579   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:20.158445   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:20.162012   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:20.162125   60948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:20.163852   60948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:19.321950   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:21.334725   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:23.820711   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:26.320918   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.174689   60948 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.007313 seconds
	I1212 21:15:29.174814   60948 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:29.189641   60948 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:29.715080   60948 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:29.715312   60948 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-372099 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 21:15:30.225103   60948 kubeadm.go:322] [bootstrap-token] Using token: h843b5.c34afz2u52stqeoc
	I1212 21:15:30.226707   60948 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:30.226873   60948 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:30.237412   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:30.245755   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:30.252764   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:30.259184   60948 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:30.405726   60948 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:30.647756   60948 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:30.647812   60948 kubeadm.go:322] 
	I1212 21:15:30.647908   60948 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:30.647920   60948 kubeadm.go:322] 
	I1212 21:15:30.648030   60948 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:30.648040   60948 kubeadm.go:322] 
	I1212 21:15:30.648076   60948 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:30.648155   60948 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:30.648219   60948 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:30.648229   60948 kubeadm.go:322] 
	I1212 21:15:30.648358   60948 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:30.648477   60948 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:30.648571   60948 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:30.648582   60948 kubeadm.go:322] 
	I1212 21:15:30.648698   60948 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1212 21:15:30.648813   60948 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:30.648824   60948 kubeadm.go:322] 
	I1212 21:15:30.648920   60948 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649052   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:30.649101   60948 kubeadm.go:322]     --control-plane 	  
	I1212 21:15:30.649111   60948 kubeadm.go:322] 
	I1212 21:15:30.649205   60948 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:30.649214   60948 kubeadm.go:322] 
	I1212 21:15:30.649313   60948 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649435   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:30.649933   60948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:30.649961   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:15:30.649971   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:30.651531   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:30.652689   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:30.663574   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
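The "Configuring bridge CNI" step above writes a conflist into /etc/cni/net.d. The exact 457-byte file is not reproduced in the log; the sketch below writes a typical bridge-plugin conflist of the same general shape (plugin names and field values here are illustrative assumptions, not minikube's file, and the output path is a harmless temp location).

    package main

    import (
        "fmt"
        "os"
    )

    // A typical bridge CNI conflist: a bridge plugin with host-local IPAM
    // plus the portmap plugin. Values are illustrative only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        // Mirrors the mkdir + copy seen in the log, but writes to /tmp so the
        // sketch does not touch a real CNI directory.
        path := "/tmp/1-k8s.conflist"
        if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        fmt.Println("wrote", path)
    }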
	I1212 21:15:30.686618   60948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:30.686690   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.686692   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=old-k8s-version-372099 minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.707974   60948 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:30.909886   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.037212   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.641453   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:28.819896   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.562965   60628 pod_ready.go:81] duration metric: took 4m0.000097626s waiting for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:29.563010   60628 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:29.563041   60628 pod_ready.go:38] duration metric: took 4m10.604144973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:29.563066   60628 kubeadm.go:640] restartCluster took 4m31.813522594s
	W1212 21:15:29.563127   60628 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:29.563156   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:32.141066   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:32.640787   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.140569   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.640785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.140535   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.641063   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.140492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.640819   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.140748   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.640647   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.141492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.641109   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.140524   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.641401   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.141549   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.641304   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.141537   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.641149   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.141391   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.640949   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.000355   60628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.437170953s)
	I1212 21:15:44.000430   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:44.014718   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:44.025263   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:44.035086   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:44.035133   60628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 21:15:44.089390   60628 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 21:15:44.089499   60628 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:44.275319   60628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:44.275496   60628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:44.275594   60628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:44.529521   60628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:42.141256   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:42.640563   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.140785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.640773   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.141155   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.641415   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.140534   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.641492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.141203   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.259301   60948 kubeadm.go:1088] duration metric: took 15.572687129s to wait for elevateKubeSystemPrivileges.
	I1212 21:15:46.259339   60948 kubeadm.go:406] StartCluster complete in 5m46.398198596s
	I1212 21:15:46.259364   60948 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.259455   60948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:15:46.261128   60948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.261410   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:15:46.261582   60948 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:15:46.261654   60948 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261676   60948 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-372099"
	W1212 21:15:46.261691   60948 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:15:46.261690   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:15:46.261729   60948 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261739   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.261745   60948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-372099"
	I1212 21:15:46.262128   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262150   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262176   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262204   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262371   60948 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-372099"
	I1212 21:15:46.262388   60948 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-372099"
	W1212 21:15:46.262396   60948 addons.go:240] addon metrics-server should already be in state true
	I1212 21:15:46.262431   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.262755   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262775   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.280829   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 21:15:46.281025   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1212 21:15:46.281167   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I1212 21:15:46.281451   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.282027   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282043   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282307   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282340   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282381   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282455   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282466   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.282760   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282816   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.283348   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283365   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283377   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.283388   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.286570   60948 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-372099"
	W1212 21:15:46.286591   60948 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:15:46.286618   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.287021   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.287041   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.300740   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1212 21:15:46.301674   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.301993   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I1212 21:15:46.302303   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.302317   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.302667   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.302772   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.302940   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.303112   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.303127   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.303537   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.304537   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.306285   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.308411   60948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:15:46.307398   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1212 21:15:46.307432   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.310694   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:15:46.310717   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:15:46.310737   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.311358   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.312839   60948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:15:44.530987   60628 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:44.531136   60628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:44.531267   60628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:44.531359   60628 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:44.531879   60628 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:44.532386   60628 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:44.533944   60628 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:44.535037   60628 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:44.536175   60628 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:44.537226   60628 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:44.537964   60628 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:44.538451   60628 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:44.538551   60628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:44.841462   60628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:45.059424   60628 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:15:45.613097   60628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:46.221274   60628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:46.372266   60628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:46.373199   60628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:46.376094   60628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:46.311872   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.314010   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.314158   60948 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.314170   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:15:46.314187   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.314387   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.314450   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.314958   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.314985   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.315221   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.315264   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.315563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.315745   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.315925   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.316191   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.322472   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324106   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.324142   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324390   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.324651   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.324861   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.325008   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.339982   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I1212 21:15:46.340365   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.340889   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.340915   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.341242   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.341434   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.343069   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.343366   60948 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.343384   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:15:46.343402   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.346212   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346596   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.346626   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346882   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.347322   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.347482   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.347618   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	W1212 21:15:46.380698   60948 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-372099" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:15:46.380724   60948 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1212 21:15:46.380745   60948 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:15:46.383175   60948 out.go:177] * Verifying Kubernetes components...
	I1212 21:15:46.384789   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:46.518292   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:15:46.518316   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:15:46.519393   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.554663   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.580810   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:15:46.580839   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:15:46.614409   60948 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.614501   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:15:46.628267   60948 node_ready.go:49] node "old-k8s-version-372099" has status "Ready":"True"
	I1212 21:15:46.628302   60948 node_ready.go:38] duration metric: took 13.858882ms waiting for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.628318   60948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:46.651927   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:46.651957   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:15:46.655191   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:46.734455   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:47.462832   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462859   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.462837   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462930   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465016   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465028   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465047   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465057   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465066   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465018   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465027   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465126   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465143   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465155   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465440   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465459   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465460   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465477   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465462   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465509   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.509931   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.509955   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.510242   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.510268   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.510289   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.529296   60948 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 21:15:47.740624   60948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006125978s)
	I1212 21:15:47.740686   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.740704   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.741066   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741082   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741104   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.741117   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741344   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741370   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741380   60948 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-372099"
	I1212 21:15:47.741382   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.743094   60948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:15:46.377620   60628 out.go:204]   - Booting up control plane ...
	I1212 21:15:46.377753   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:46.380316   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:46.381669   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:46.400406   60628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:46.401911   60628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:46.402016   60628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 21:15:46.577916   60628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:47.744911   60948 addons.go:502] enable addons completed in 1.483323446s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:15:48.879924   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:51.240011   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.081961   60628 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503798 seconds
	I1212 21:15:55.108753   60628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:55.132442   60628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:55.675426   60628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:55.675616   60628 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-343495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:15:56.197198   60628 kubeadm.go:322] [bootstrap-token] Using token: 6e6rca.dj99vsq9tzjoif3m
	I1212 21:15:56.198596   60628 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:56.198756   60628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:56.204758   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:15:56.217506   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:56.221482   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:56.225791   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:56.231024   60628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:56.249696   60628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:15:56.516070   60628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:56.613203   60628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:56.613227   60628 kubeadm.go:322] 
	I1212 21:15:56.613315   60628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:56.613340   60628 kubeadm.go:322] 
	I1212 21:15:56.613432   60628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:56.613447   60628 kubeadm.go:322] 
	I1212 21:15:56.613501   60628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:56.613588   60628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:56.613671   60628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:56.613682   60628 kubeadm.go:322] 
	I1212 21:15:56.613755   60628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 21:15:56.613762   60628 kubeadm.go:322] 
	I1212 21:15:56.613822   60628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:15:56.613832   60628 kubeadm.go:322] 
	I1212 21:15:56.613903   60628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:56.614004   60628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:56.614104   60628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:56.614116   60628 kubeadm.go:322] 
	I1212 21:15:56.614244   60628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:15:56.614369   60628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:56.614388   60628 kubeadm.go:322] 
	I1212 21:15:56.614507   60628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614653   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:56.614682   60628 kubeadm.go:322] 	--control-plane 
	I1212 21:15:56.614689   60628 kubeadm.go:322] 
	I1212 21:15:56.614787   60628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:56.614797   60628 kubeadm.go:322] 
	I1212 21:15:56.614865   60628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614993   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:56.616155   60628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:56.616184   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:15:56.616197   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:56.618787   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:53.240376   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.738865   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:56.620193   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:56.653642   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:15:56.701431   60628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:56.701520   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.701521   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=no-preload-343495 minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.765645   60628 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:57.021925   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.162627   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.772366   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.239852   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.239881   60948 pod_ready.go:81] duration metric: took 10.584655594s waiting for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.239895   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245919   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.245943   60948 pod_ready.go:81] duration metric: took 6.039649ms waiting for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245955   60948 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251905   60948 pod_ready.go:92] pod "kube-proxy-vzqkz" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.251933   60948 pod_ready.go:81] duration metric: took 5.969732ms waiting for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251943   60948 pod_ready.go:38] duration metric: took 10.623613273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:57.251963   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:15:57.252021   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:15:57.271808   60948 api_server.go:72] duration metric: took 10.891018678s to wait for apiserver process to appear ...
	I1212 21:15:57.271834   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:15:57.271853   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:15:57.279544   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:15:57.280373   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:15:57.280393   60948 api_server.go:131] duration metric: took 8.55283ms to wait for apiserver health ...
	I1212 21:15:57.280401   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:57.284489   60948 system_pods.go:59] 5 kube-system pods found
	I1212 21:15:57.284516   60948 system_pods.go:61] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.284520   60948 system_pods.go:61] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.284525   60948 system_pods.go:61] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.284531   60948 system_pods.go:61] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.284535   60948 system_pods.go:61] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.284542   60948 system_pods.go:74] duration metric: took 4.136571ms to wait for pod list to return data ...
	I1212 21:15:57.284549   60948 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:57.288616   60948 default_sa.go:45] found service account: "default"
	I1212 21:15:57.288643   60948 default_sa.go:55] duration metric: took 4.087698ms for default service account to be created ...
	I1212 21:15:57.288653   60948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:57.292785   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.292807   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.292812   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.292816   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.292822   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.292827   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.292842   60948 retry.go:31] will retry after 207.544988ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.505885   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.505911   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.505917   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.505921   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.505928   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.505932   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.505949   60948 retry.go:31] will retry after 367.076908ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.878466   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.878501   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.878509   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.878514   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.878520   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.878527   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.878547   60948 retry.go:31] will retry after 381.308829ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.264211   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.264237   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.264243   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.264247   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.264256   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.264262   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.264290   60948 retry.go:31] will retry after 366.461937ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.638206   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.638229   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.638234   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.638238   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.638245   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.638249   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.638276   60948 retry.go:31] will retry after 512.413163ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.156233   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.156263   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.156268   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.156272   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.156279   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.156284   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.156301   60948 retry.go:31] will retry after 775.973999ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.937928   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.937958   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.937966   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.937973   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.937983   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.937990   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.938009   60948 retry.go:31] will retry after 831.74396ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:00.775403   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:00.775427   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:00.775432   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:00.775436   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:00.775442   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:00.775447   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:00.775461   60948 retry.go:31] will retry after 1.069326929s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:01.849879   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:01.849906   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:01.849911   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:01.849915   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:01.849922   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:01.849927   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:01.849944   60948 retry.go:31] will retry after 1.540430535s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.271568   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:58.772443   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.271781   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.771732   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.272235   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.771891   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.271870   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.772445   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.271997   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.772496   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.395395   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:03.395421   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:03.395427   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:03.395431   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:03.395437   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:03.395442   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:03.395458   60948 retry.go:31] will retry after 2.25868002s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:05.661953   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:05.661988   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:05.661997   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:05.662005   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:05.662016   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:05.662026   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:05.662047   60948 retry.go:31] will retry after 2.893719866s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:03.272067   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.771992   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.272187   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.772518   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.272480   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.772460   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.272463   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.772291   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.271662   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.772063   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.272491   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.414409   60628 kubeadm.go:1088] duration metric: took 11.712956328s to wait for elevateKubeSystemPrivileges.
	I1212 21:16:08.414452   60628 kubeadm.go:406] StartCluster complete in 5m10.714058162s
	I1212 21:16:08.414480   60628 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.414582   60628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:16:08.417772   60628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.418132   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:16:08.418167   60628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:16:08.418267   60628 addons.go:69] Setting storage-provisioner=true in profile "no-preload-343495"
	I1212 21:16:08.418281   60628 addons.go:69] Setting default-storageclass=true in profile "no-preload-343495"
	I1212 21:16:08.418289   60628 addons.go:231] Setting addon storage-provisioner=true in "no-preload-343495"
	W1212 21:16:08.418297   60628 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:16:08.418301   60628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-343495"
	I1212 21:16:08.418310   60628 addons.go:69] Setting metrics-server=true in profile "no-preload-343495"
	I1212 21:16:08.418344   60628 addons.go:231] Setting addon metrics-server=true in "no-preload-343495"
	I1212 21:16:08.418349   60628 host.go:66] Checking if "no-preload-343495" exists ...
	W1212 21:16:08.418353   60628 addons.go:240] addon metrics-server should already be in state true
	I1212 21:16:08.418367   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:16:08.418401   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418810   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418850   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.437816   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I1212 21:16:08.438320   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.438921   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.438945   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.439225   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I1212 21:16:08.439418   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.439740   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.439809   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1212 21:16:08.440064   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.440092   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.440471   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.440491   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.440499   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.440887   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.440978   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.441002   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.441399   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.441442   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.441724   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.441960   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.446221   60628 addons.go:231] Setting addon default-storageclass=true in "no-preload-343495"
	W1212 21:16:08.446247   60628 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:16:08.446276   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.446655   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.446690   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.456479   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1212 21:16:08.456883   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.457330   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.457343   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.457784   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.457958   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.459741   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.461624   60628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:16:08.462951   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:16:08.462963   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:16:08.462978   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.462595   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1212 21:16:08.463831   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.464424   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.464443   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.465295   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.465627   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.467919   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468652   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.468681   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468905   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.469083   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.469197   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.469296   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.472614   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.474536   60628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:16:08.475957   60628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.475976   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:16:08.475995   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.476821   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I1212 21:16:08.477241   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.477772   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.477796   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.478322   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.479408   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.479457   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.479725   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480262   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.480285   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480565   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.480760   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.480909   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.481087   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.496182   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1212 21:16:08.496703   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.497250   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.497275   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.497705   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.497959   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.499696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.500049   60628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.500071   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:16:08.500098   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.503216   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503689   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.503717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503979   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.504187   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.504348   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.504521   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.519292   60628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-343495" context rescaled to 1 replicas
	I1212 21:16:08.519324   60628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:16:08.521243   60628 out.go:177] * Verifying Kubernetes components...
	I1212 21:16:08.522602   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:08.637693   60628 node_ready.go:35] waiting up to 6m0s for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.638072   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:16:08.640594   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:16:08.640620   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:16:08.645008   60628 node_ready.go:49] node "no-preload-343495" has status "Ready":"True"
	I1212 21:16:08.645041   60628 node_ready.go:38] duration metric: took 7.313798ms waiting for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.645056   60628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.650650   60628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658528   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.658556   60628 pod_ready.go:81] duration metric: took 7.881265ms waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658569   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682938   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.682962   60628 pod_ready.go:81] duration metric: took 24.384424ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682975   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.683220   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.688105   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:16:08.688131   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:16:08.695007   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.695034   60628 pod_ready.go:81] duration metric: took 12.050101ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.695046   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701206   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.701230   60628 pod_ready.go:81] duration metric: took 6.174333ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701240   60628 pod_ready.go:38] duration metric: took 56.165354ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.701262   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:16:08.701321   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:16:08.744650   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.758415   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:08.758444   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:16:08.841030   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:09.387385   60628 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1212 21:16:10.224475   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541186317s)
	I1212 21:16:10.224515   60628 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.523170366s)
	I1212 21:16:10.224548   60628 api_server.go:72] duration metric: took 1.705201863s to wait for apiserver process to appear ...
	I1212 21:16:10.224561   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:16:10.224571   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.479890747s)
	I1212 21:16:10.224606   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224579   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:16:10.224621   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.224522   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224686   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225001   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225050   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225065   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225074   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225011   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225019   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225020   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225115   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225130   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225140   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225347   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225358   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225507   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225572   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225600   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.233359   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:16:10.237567   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:16:10.237593   60628 api_server.go:131] duration metric: took 13.024501ms to wait for apiserver health ...
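	The healthz probe logged just above is a plain HTTPS GET that minikube repeats until the endpoint answers 200 with body "ok". A minimal, hypothetical sketch of that check (not minikube's api_server.go; the address is taken from the log, and certificate verification is skipped only because the sketch has no cluster CA at hand):

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	url := "https://192.168.61.176:8443/healthz" // address as reported in the log above

	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // minikube uses its own retry/backoff
	}
}
```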
	I1212 21:16:10.237602   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:16:10.268851   60628 system_pods.go:59] 9 kube-system pods found
	I1212 21:16:10.268891   60628 system_pods.go:61] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268903   60628 system_pods.go:61] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268912   60628 system_pods.go:61] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.268920   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.268927   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.268936   60628 system_pods.go:61] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.268943   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.268953   60628 system_pods.go:61] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.268963   60628 system_pods.go:61] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending
	I1212 21:16:10.268971   60628 system_pods.go:74] duration metric: took 31.361836ms to wait for pod list to return data ...
	I1212 21:16:10.268987   60628 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:16:10.270947   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.270971   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.271270   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.271290   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.280134   60628 default_sa.go:45] found service account: "default"
	I1212 21:16:10.280159   60628 default_sa.go:55] duration metric: took 11.163534ms for default service account to be created ...
	I1212 21:16:10.280169   60628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:16:10.314822   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.314864   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314873   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314879   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.314886   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.314893   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.314903   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.314912   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.314923   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.314937   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.314957   60628 retry.go:31] will retry after 284.074155ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.328798   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.487713481s)
	I1212 21:16:10.328851   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.328866   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329251   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329276   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329276   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.329291   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.329304   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329540   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329556   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329566   60628 addons.go:467] Verifying addon metrics-server=true in "no-preload-343495"
	I1212 21:16:10.332474   60628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:16:08.563361   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:08.563393   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:08.563401   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:08.563408   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:08.563420   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:08.563427   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:08.563449   60948 retry.go:31] will retry after 2.871673075s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:11.441932   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:11.441970   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:11.441977   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:11.441983   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:11.441993   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.442003   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:11.442022   60948 retry.go:31] will retry after 3.977150615s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:10.333924   60628 addons.go:502] enable addons completed in 1.915760025s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:16:10.616684   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.616724   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616739   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616748   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.616757   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.616764   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.616775   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.616785   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.616795   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.616807   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.616825   60628 retry.go:31] will retry after 291.662068ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.919064   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.919104   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919114   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919125   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.919135   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.919142   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.919152   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.919160   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.919211   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.919229   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.919259   60628 retry.go:31] will retry after 381.992278ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.312083   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.312115   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.312121   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.312128   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.312137   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.312146   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.312152   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.312162   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.312170   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.312189   60628 retry.go:31] will retry after 495.705235ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.820167   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.820200   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.820205   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.820212   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.820217   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.820222   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.820226   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.820232   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.820237   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.820254   60628 retry.go:31] will retry after 635.810888ms: missing components: kube-dns, kube-proxy
	I1212 21:16:12.464096   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:12.464139   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Running
	I1212 21:16:12.464145   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:12.464149   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:12.464154   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:12.464158   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Running
	I1212 21:16:12.464162   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:12.464168   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:12.464176   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Running
	I1212 21:16:12.464185   60628 system_pods.go:126] duration metric: took 2.184010512s to wait for k8s-apps to be running ...
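	The retry lines above (system_pods.go / retry.go) repeatedly list the kube-system pods and wait until the components they care about (kube-dns, kube-proxy) report Running. A minimal sketch of that pattern with client-go, assuming a kubeconfig at the default path and a fixed retry interval instead of minikube's backoff:

```go
// Sketch: wait until required kube-system components (by k8s-app label) are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	required := []string{"kube-dns", "kube-proxy"} // the components the log retries on

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		missing := map[string]bool{}
		for _, c := range required {
			missing[c] = true
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				delete(missing, p.Labels["k8s-app"])
			}
		}
		if len(missing) == 0 {
			fmt.Println("all required kube-system components are running")
			return
		}
		fmt.Printf("will retry: missing %v\n", missing)
		time.Sleep(300 * time.Millisecond)
	}
}
```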
	I1212 21:16:12.464192   60628 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:12.464272   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:12.480090   60628 system_svc.go:56] duration metric: took 15.887114ms WaitForService to wait for kubelet.
	I1212 21:16:12.480124   60628 kubeadm.go:581] duration metric: took 3.960778694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:12.480163   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:12.483564   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:12.483589   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:12.483601   60628 node_conditions.go:105] duration metric: took 3.433071ms to run NodePressure ...
	I1212 21:16:12.483612   60628 start.go:228] waiting for startup goroutines ...
	I1212 21:16:12.483617   60628 start.go:233] waiting for cluster config update ...
	I1212 21:16:12.483626   60628 start.go:242] writing updated cluster config ...
	I1212 21:16:12.483887   60628 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:12.534680   60628 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 21:16:12.536622   60628 out.go:177] * Done! kubectl is now configured to use "no-preload-343495" cluster and "default" namespace by default
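	A few lines earlier the test verifies the kubelet unit by running `sudo systemctl is-active --quiet service kubelet` over SSH and checking the exit code. A hypothetical local equivalent (not minikube's system_svc.go; sudo and the SSH transport into the VM are omitted):

```go
// Sketch: "is-active --quiet" exits 0 only when the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```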
	I1212 21:16:15.424662   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:15.424691   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:15.424697   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:15.424701   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:15.424707   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:15.424712   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:15.424728   60948 retry.go:31] will retry after 4.920488737s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:20.351078   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:20.351107   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:20.351112   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:20.351116   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:20.351122   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:20.351127   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:20.351143   60948 retry.go:31] will retry after 5.718245097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:26.077073   60948 system_pods.go:86] 6 kube-system pods found
	I1212 21:16:26.077097   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:26.077103   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:26.077107   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Pending
	I1212 21:16:26.077111   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:26.077117   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:26.077122   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:26.077139   60948 retry.go:31] will retry after 8.251519223s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:34.334757   60948 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:34.334782   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:34.334787   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:34.334791   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:34.334796   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Pending
	I1212 21:16:34.334799   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:34.334804   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:34.334811   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:34.334815   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:34.334830   60948 retry.go:31] will retry after 8.584990669s: missing components: kube-apiserver, kube-scheduler
	I1212 21:16:42.927591   60948 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:42.927618   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:42.927624   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:42.927628   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:42.927632   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Running
	I1212 21:16:42.927637   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:42.927642   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:42.927647   60948 system_pods.go:89] "kube-scheduler-old-k8s-version-372099" [0e3e4e58-289f-47f1-999b-8fd87b90558a] Running
	I1212 21:16:42.927653   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:42.927658   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:42.927667   60948 system_pods.go:126] duration metric: took 45.639007967s to wait for k8s-apps to be running ...
	I1212 21:16:42.927673   60948 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:42.927715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:42.948680   60948 system_svc.go:56] duration metric: took 20.9943ms WaitForService to wait for kubelet.
	I1212 21:16:42.948711   60948 kubeadm.go:581] duration metric: took 56.56793182s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:42.948735   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:42.952462   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:42.952493   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:42.952505   60948 node_conditions.go:105] duration metric: took 3.763543ms to run NodePressure ...
	I1212 21:16:42.952518   60948 start.go:228] waiting for startup goroutines ...
	I1212 21:16:42.952527   60948 start.go:233] waiting for cluster config update ...
	I1212 21:16:42.952541   60948 start.go:242] writing updated cluster config ...
	I1212 21:16:42.952847   60948 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:43.001964   60948 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 21:16:43.003962   60948 out.go:177] 
	W1212 21:16:43.005327   60948 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 21:16:43.006827   60948 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 21:16:43.008259   60948 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-372099" cluster and "default" namespace by default
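	The closing warning compares the kubectl client's minor version with the cluster's: 1.28.4 against 1.16.0 gives a minor skew of 12, far outside the tolerated window, while the no-preload cluster above (1.29.0-rc.2) is within one minor version. A small illustrative helper (not minikube's start.go) reproducing that arithmetic:

```go
// Sketch: compute the absolute difference between two Kubernetes minor versions.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minorSkew(clientVersion, clusterVersion string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1]) // second field is the minor version
		return n
	}
	d := minor(clientVersion) - minor(clusterVersion)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.28.4", "1.16.0"))      // 12: triggers the warning above
	fmt.Println(minorSkew("1.28.4", "1.29.0-rc.2")) // 1: within the supported skew
}
```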
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:10:03 UTC, ends at Tue 2023-12-12 21:24:09 UTC. --
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.207981543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7873e76-8ebe-431a-b0e9-99a1d3100932 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.208152706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7873e76-8ebe-431a-b0e9-99a1d3100932 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.251103365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a23ddc1b-d4ac-47f6-a0db-cf225d10fc43 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.251188286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a23ddc1b-d4ac-47f6-a0db-cf225d10fc43 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.252244273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=20add453-e62a-4abb-82d1-97f817fe9043 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.253007389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416249252990204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=20add453-e62a-4abb-82d1-97f817fe9043 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.253606303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=931f4758-1d4c-45fa-bde8-80ec47cca2d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.253759608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=931f4758-1d4c-45fa-bde8-80ec47cca2d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.253978370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=931f4758-1d4c-45fa-bde8-80ec47cca2d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.296974671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3fb3b2f0-72ae-4b09-9e77-a8ee22f73877 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.297056418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3fb3b2f0-72ae-4b09-9e77-a8ee22f73877 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.298241862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=968ae6b4-391e-4010-b89d-0309f8d010ce name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.298814648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416249298792473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=968ae6b4-391e-4010-b89d-0309f8d010ce name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.299545323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=99441a83-1460-4d60-9eba-9c8f6cdf3926 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.299620163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=99441a83-1460-4d60-9eba-9c8f6cdf3926 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.299904263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=99441a83-1460-4d60-9eba-9c8f6cdf3926 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.336304676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6b8915aa-7291-48cc-a339-f2483de9ccb0 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.336364671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6b8915aa-7291-48cc-a339-f2483de9ccb0 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.338370479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1b755f77-487e-45f9-ba32-93096622fafd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.338917027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416249338897254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1b755f77-487e-45f9-ba32-93096622fafd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.339558561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ca8292ce-d9c0-4ad0-8595-2c10b2a38306 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.339641423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ca8292ce-d9c0-4ad0-8595-2c10b2a38306 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.339890968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ca8292ce-d9c0-4ad0-8595-2c10b2a38306 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.364155552Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=842a4cd0-0e70-4c30-84bf-929cd9f93cd3 name=/runtime.v1.RuntimeService/Status
	Dec 12 21:24:09 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:24:09.364269045Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=842a4cd0-0e70-4c30-84bf-929cd9f93cd3 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea6928f21cd25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   972104fa23ba0       storage-provisioner
	c893e872464b5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   481966ba028dd       busybox
	d5ecf165d7cb6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   0d8da62cfda85       coredns-5dd5756b68-b5jrg
	5c1bc3f3622da       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   b518f95b229fe       kube-proxy-47qmb
	ca0e02bbed658       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   972104fa23ba0       storage-provisioner
	45c49920e4072       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   2cd11974b193c       etcd-default-k8s-diff-port-171828
	27b89c10d83be       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   435b602d77216       kube-apiserver-default-k8s-diff-port-171828
	cd9a395f80d15       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   830461dcb4c5b       kube-scheduler-default-k8s-diff-port-171828
	b4c8c82cfc4cf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   da2ac77f29ee8       kube-controller-manager-default-k8s-diff-port-171828
	
	
	==> coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35738 - 7522 "HINFO IN 478030668955208960.6356851381917873108. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008753741s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-171828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-171828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=default-k8s-diff-port-171828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_02_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:02:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-171828
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 21:24:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:21:23 +0000   Tue, 12 Dec 2023 21:02:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:21:23 +0000   Tue, 12 Dec 2023 21:02:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:21:23 +0000   Tue, 12 Dec 2023 21:02:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:21:23 +0000   Tue, 12 Dec 2023 21:10:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.253
	  Hostname:    default-k8s-diff-port-171828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e54c995e9bd4393816bbe98760d69c0
	  System UUID:                9e54c995-e9bd-4393-816b-be98760d69c0
	  Boot ID:                    462fdaf8-d418-495c-9331-be8ebcbdc08f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-b5jrg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-171828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-171828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-171828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-47qmb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-171828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-fqrqh                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-171828 event: Registered Node default-k8s-diff-port-171828 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-171828 event: Registered Node default-k8s-diff-port-171828 in Controller
	
	
	==> dmesg <==
	[Dec12 21:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.086663] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.754683] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec12 21:10] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155599] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.564849] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.137705] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.122665] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.167439] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.133838] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.239599] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +18.406432] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +14.169933] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.016652] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] <==
	{"level":"info","ts":"2023-12-12T21:10:37.702232Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T21:10:37.702347Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.253:2380"}
	{"level":"info","ts":"2023-12-12T21:10:37.702386Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.253:2380"}
	{"level":"info","ts":"2023-12-12T21:10:38.110515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"582eb53f9d006d6d is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T21:10:38.110761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"582eb53f9d006d6d became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T21:10:38.110779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"582eb53f9d006d6d received MsgPreVoteResp from 582eb53f9d006d6d at term 2"}
	{"level":"info","ts":"2023-12-12T21:10:38.110791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"582eb53f9d006d6d became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T21:10:38.110796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"582eb53f9d006d6d received MsgVoteResp from 582eb53f9d006d6d at term 3"}
	{"level":"info","ts":"2023-12-12T21:10:38.110805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"582eb53f9d006d6d became leader at term 3"}
	{"level":"info","ts":"2023-12-12T21:10:38.110812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 582eb53f9d006d6d elected leader 582eb53f9d006d6d at term 3"}
	{"level":"info","ts":"2023-12-12T21:10:38.113219Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"582eb53f9d006d6d","local-member-attributes":"{Name:default-k8s-diff-port-171828 ClientURLs:[https://192.168.72.253:2379]}","request-path":"/0/members/582eb53f9d006d6d/attributes","cluster-id":"c6b9032463a87dac","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T21:10:38.113388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T21:10:38.118325Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T21:10:38.11939Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T21:10:38.1299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T21:10:38.130049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T21:10:38.131778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.253:2379"}
	{"level":"info","ts":"2023-12-12T21:10:43.750131Z","caller":"traceutil/trace.go:171","msg":"trace[752656045] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"119.894759ms","start":"2023-12-12T21:10:43.630223Z","end":"2023-12-12T21:10:43.750118Z","steps":["trace[752656045] 'process raft request'  (duration: 113.948756ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:10:47.35267Z","caller":"traceutil/trace.go:171","msg":"trace[1737857950] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"163.727339ms","start":"2023-12-12T21:10:47.188923Z","end":"2023-12-12T21:10:47.35265Z","steps":["trace[1737857950] 'process raft request'  (duration: 163.432345ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:10:47.357793Z","caller":"traceutil/trace.go:171","msg":"trace[1995843512] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"165.63019ms","start":"2023-12-12T21:10:47.192152Z","end":"2023-12-12T21:10:47.357783Z","steps":["trace[1995843512] 'process raft request'  (duration: 165.467607ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:10:50.095015Z","caller":"traceutil/trace.go:171","msg":"trace[1658801008] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"116.333945ms","start":"2023-12-12T21:10:49.978661Z","end":"2023-12-12T21:10:50.094995Z","steps":["trace[1658801008] 'process raft request'  (duration: 116.203852ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:18:09.206299Z","caller":"traceutil/trace.go:171","msg":"trace[207985459] transaction","detail":"{read_only:false; response_revision:1014; number_of_response:1; }","duration":"172.578251ms","start":"2023-12-12T21:18:09.033677Z","end":"2023-12-12T21:18:09.206255Z","steps":["trace[207985459] 'process raft request'  (duration: 171.794054ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:20:38.162587Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":890}
	{"level":"info","ts":"2023-12-12T21:20:38.165797Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":890,"took":"2.808074ms","hash":3286688731}
	{"level":"info","ts":"2023-12-12T21:20:38.16587Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3286688731,"revision":890,"compact-revision":-1}
	
	
	==> kernel <==
	 21:24:09 up 14 min,  0 users,  load average: 0.18, 0.24, 0.20
	Linux default-k8s-diff-port-171828 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] <==
	I1212 21:20:39.801536       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:20:40.802049       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:20:40.802251       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:20:40.802361       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:20:40.802175       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:20:40.802540       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:20:40.803879       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:21:39.697846       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:21:40.802900       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:21:40.802959       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:21:40.802968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:21:40.804251       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:21:40.804362       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:21:40.804370       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:22:39.697982       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 21:23:39.697539       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:23:40.803397       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:23:40.803439       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:23:40.803445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:23:40.804846       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:23:40.805031       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:23:40.805066       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] <==
	I1212 21:18:23.547231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:18:53.055963       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:18:53.555013       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:19:23.063927       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:19:23.563841       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:19:53.071681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:19:53.580433       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:20:23.078094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:20:23.589323       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:20:53.084159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:20:53.598958       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:21:23.091006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:21:23.611245       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:21:48.099375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="420.369µs"
	E1212 21:21:53.100081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:21:53.621171       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:22:03.092278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="224.136µs"
	E1212 21:22:23.106650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:22:23.632457       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:22:53.115291       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:22:53.641935       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:23:23.121447       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:23:23.651784       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:23:53.128932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:23:53.667820       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] <==
	I1212 21:10:41.533568       1 server_others.go:69] "Using iptables proxy"
	I1212 21:10:41.570051       1 node.go:141] Successfully retrieved node IP: 192.168.72.253
	I1212 21:10:41.641796       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 21:10:41.641873       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 21:10:41.645683       1 server_others.go:152] "Using iptables Proxier"
	I1212 21:10:41.645840       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 21:10:41.646053       1 server.go:846] "Version info" version="v1.28.4"
	I1212 21:10:41.646092       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:10:41.646875       1 config.go:188] "Starting service config controller"
	I1212 21:10:41.646929       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 21:10:41.646976       1 config.go:97] "Starting endpoint slice config controller"
	I1212 21:10:41.646992       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 21:10:41.648503       1 config.go:315] "Starting node config controller"
	I1212 21:10:41.648557       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 21:10:41.747860       1 shared_informer.go:318] Caches are synced for service config
	I1212 21:10:41.748026       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 21:10:41.749429       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] <==
	I1212 21:10:37.745885       1 serving.go:348] Generated self-signed cert in-memory
	W1212 21:10:39.766679       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 21:10:39.766859       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 21:10:39.766898       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 21:10:39.766922       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 21:10:39.791293       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 21:10:39.791382       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:10:39.799900       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 21:10:39.800195       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:10:39.800243       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 21:10:39.800276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 21:10:39.900785       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:10:03 UTC, ends at Tue 2023-12-12 21:24:09 UTC. --
	Dec 12 21:21:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:21:34.090863     931 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-crmzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-fqrqh_kube-system(633d3468-a8df-4c9b-9bab-8c26ce998832): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:21:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:21:34.091006     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:21:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:21:34.092647     931 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:21:34 default-k8s-diff-port-171828 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:21:34 default-k8s-diff-port-171828 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:21:34 default-k8s-diff-port-171828 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:21:48 default-k8s-diff-port-171828 kubelet[931]: E1212 21:21:48.073537     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:22:03 default-k8s-diff-port-171828 kubelet[931]: E1212 21:22:03.072598     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:22:18 default-k8s-diff-port-171828 kubelet[931]: E1212 21:22:18.072076     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:22:33 default-k8s-diff-port-171828 kubelet[931]: E1212 21:22:33.071918     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:22:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:22:34.095625     931 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:22:34 default-k8s-diff-port-171828 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:22:34 default-k8s-diff-port-171828 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:22:34 default-k8s-diff-port-171828 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:22:45 default-k8s-diff-port-171828 kubelet[931]: E1212 21:22:45.072088     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:22:59 default-k8s-diff-port-171828 kubelet[931]: E1212 21:22:59.071859     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:23:13 default-k8s-diff-port-171828 kubelet[931]: E1212 21:23:13.072056     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:23:27 default-k8s-diff-port-171828 kubelet[931]: E1212 21:23:27.072306     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:23:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:23:34.088838     931 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:23:34 default-k8s-diff-port-171828 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:23:34 default-k8s-diff-port-171828 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:23:34 default-k8s-diff-port-171828 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:23:39 default-k8s-diff-port-171828 kubelet[931]: E1212 21:23:39.072378     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:23:52 default-k8s-diff-port-171828 kubelet[931]: E1212 21:23:52.072498     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:24:04 default-k8s-diff-port-171828 kubelet[931]: E1212 21:24:04.073133     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	
	
	==> storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] <==
	I1212 21:10:41.287669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 21:11:11.294001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] <==
	I1212 21:11:12.477851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:11:12.489217       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:11:12.489353       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:11:29.898386       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:11:29.901217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-171828_ab619959-6c2b-45d8-8e13-36bb7dad0675!
	I1212 21:11:29.902351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8ac4db0-8089-47ee-a188-aec6180ea709", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-171828_ab619959-6c2b-45d8-8e13-36bb7dad0675 became leader
	I1212 21:11:30.001795       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-171828_ab619959-6c2b-45d8-8e13-36bb7dad0675!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fqrqh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 describe pod metrics-server-57f55c9bc5-fqrqh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-171828 describe pod metrics-server-57f55c9bc5-fqrqh: exit status 1 (69.91206ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fqrqh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-171828 describe pod metrics-server-57f55c9bc5-fqrqh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.28s)
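Note: the metrics-server pod shown as non-running above is expected to sit in ImagePullBackOff for this test setup; the kubelet log shows a back-off pulling fake.domain/registry.k8s.io/echoserver:1.4, which matches the --registries=MetricsServer=fake.domain override used when the addon was enabled (see the Audit log entries). As a rough, hand-run equivalent of the post-mortem checks above — a sketch only, assuming the default-k8s-diff-port-171828 kubeconfig context still exists and assuming the addon labels its pods with k8s-app=metrics-server:

	# same query the test helper uses to find non-running pods
	kubectl --context default-k8s-diff-port-171828 get pods -A --field-selector=status.phase!=Running
	# describe the current metrics-server pod by label instead of a possibly stale pod name
	kubectl --context default-k8s-diff-port-171828 -n kube-system describe pods -l k8s-app=metrics-server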

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-343495 -n no-preload-343495
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:25:13.13182199 +0000 UTC m=+5314.351994560
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
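The wait this test performs can be reproduced by hand; a minimal kubectl sketch, assuming the no-preload-343495 kubeconfig context is still available and reusing the namespace and label selector reported above (the 540s timeout mirrors the test's 9m0s budget):

	# wait for the dashboard pod the test expects to come up after the restart
	kubectl --context no-preload-343495 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
	# if the wait times out, check whether the dashboard addon created any workloads at all
	kubectl --context no-preload-343495 -n kubernetes-dashboard get pods,deployments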
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-343495 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-343495 logs -n 25: (1.618428879s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-690675 sudo cat                              | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo find                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo crio                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-690675                                       | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:06:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:06:02.112042   61298 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:06:02.112158   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112166   61298 out.go:309] Setting ErrFile to fd 2...
	I1212 21:06:02.112171   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112352   61298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:06:02.112888   61298 out.go:303] Setting JSON to false
	I1212 21:06:02.113799   61298 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6516,"bootTime":1702408646,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:06:02.113858   61298 start.go:138] virtualization: kvm guest
	I1212 21:06:02.116152   61298 out.go:177] * [default-k8s-diff-port-171828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:06:02.118325   61298 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:06:02.118373   61298 notify.go:220] Checking for updates...
	I1212 21:06:02.120036   61298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:06:02.121697   61298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:06:02.123350   61298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:06:02.124958   61298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:06:02.126355   61298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:06:02.128221   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:06:02.128652   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.128709   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.143368   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I1212 21:06:02.143740   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.144319   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.144342   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.144674   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.144877   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.145143   61298 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:06:02.145473   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.145519   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.160165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1212 21:06:02.160611   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.161098   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.161129   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.161410   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.161605   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.198703   61298 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:06:02.199992   61298 start.go:298] selected driver: kvm2
	I1212 21:06:02.200011   61298 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.200131   61298 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:06:02.200848   61298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.200920   61298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:06:02.215947   61298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:06:02.216333   61298 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:02.216397   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:06:02.216410   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:06:02.216420   61298 start_flags.go:323] config:
	{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-17182
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.216597   61298 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.218773   61298 out.go:177] * Starting control plane node default-k8s-diff-port-171828 in cluster default-k8s-diff-port-171828
	I1212 21:05:59.427580   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:02.220182   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:06:02.220241   61298 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 21:06:02.220256   61298 cache.go:56] Caching tarball of preloaded images
	I1212 21:06:02.220379   61298 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:06:02.220393   61298 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 21:06:02.220514   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:06:02.220739   61298 start.go:365] acquiring machines lock for default-k8s-diff-port-171828: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:06:05.507538   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:08.579605   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:14.659535   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:17.731542   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:23.811575   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:26.883541   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:32.963600   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:36.035521   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:42.115475   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:45.187562   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:51.267528   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:54.339532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:00.419548   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:03.491553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:09.571514   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:12.643531   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:18.723534   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:21.795549   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:27.875554   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:30.947574   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:37.027523   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:40.099490   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:46.179518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:49.251577   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:55.331532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:58.403520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:04.483547   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:07.555546   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:13.635553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:16.707518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:22.787551   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:25.859539   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:31.939511   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:35.011564   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:41.091518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:44.163443   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:50.243526   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:53.315520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:59.395550   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:02.467533   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:05.471384   60833 start.go:369] acquired machines lock for "embed-certs-831188" in 4m18.011296189s
	I1212 21:09:05.471446   60833 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:05.471453   60833 fix.go:54] fixHost starting: 
	I1212 21:09:05.471803   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:05.471837   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:05.486451   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1212 21:09:05.486900   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:05.487381   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:05.487404   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:05.487715   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:05.487879   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:05.488020   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:05.489670   60833 fix.go:102] recreateIfNeeded on embed-certs-831188: state=Stopped err=<nil>
	I1212 21:09:05.489704   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	W1212 21:09:05.489876   60833 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:05.492059   60833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831188" ...
	I1212 21:09:05.493752   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Start
	I1212 21:09:05.493959   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring networks are active...
	I1212 21:09:05.494984   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network default is active
	I1212 21:09:05.495423   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network mk-embed-certs-831188 is active
	I1212 21:09:05.495761   60833 main.go:141] libmachine: (embed-certs-831188) Getting domain xml...
	I1212 21:09:05.496421   60833 main.go:141] libmachine: (embed-certs-831188) Creating domain...
	I1212 21:09:06.732388   60833 main.go:141] libmachine: (embed-certs-831188) Waiting to get IP...
	I1212 21:09:06.733338   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:06.733708   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:06.733785   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:06.733676   61768 retry.go:31] will retry after 284.906493ms: waiting for machine to come up
	I1212 21:09:07.020284   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.020718   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.020745   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.020671   61768 retry.go:31] will retry after 293.274895ms: waiting for machine to come up
	I1212 21:09:07.315313   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.315686   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.315712   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.315641   61768 retry.go:31] will retry after 361.328832ms: waiting for machine to come up
	I1212 21:09:05.469256   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:05.469293   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:09:05.471233   60628 machine.go:91] provisioned docker machine in 4m37.408714984s
	I1212 21:09:05.471294   60628 fix.go:56] fixHost completed within 4m37.431179626s
	I1212 21:09:05.471299   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 4m37.431203273s
	W1212 21:09:05.471318   60628 start.go:694] error starting host: provision: host is not running
	W1212 21:09:05.471416   60628 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 21:09:05.471424   60628 start.go:709] Will try again in 5 seconds ...
	I1212 21:09:07.678255   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.678636   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.678700   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.678599   61768 retry.go:31] will retry after 604.479659ms: waiting for machine to come up
	I1212 21:09:08.284350   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:08.284754   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:08.284779   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:08.284701   61768 retry.go:31] will retry after 731.323448ms: waiting for machine to come up
	I1212 21:09:09.017564   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.018007   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.018040   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.017968   61768 retry.go:31] will retry after 734.083609ms: waiting for machine to come up
	I1212 21:09:09.753947   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.754423   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.754446   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.754362   61768 retry.go:31] will retry after 786.816799ms: waiting for machine to come up
	I1212 21:09:10.542771   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:10.543304   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:10.543341   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:10.543264   61768 retry.go:31] will retry after 1.40646031s: waiting for machine to come up
	I1212 21:09:11.951821   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:11.952180   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:11.952223   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:11.952135   61768 retry.go:31] will retry after 1.693488962s: waiting for machine to come up
	I1212 21:09:10.473087   60628 start.go:365] acquiring machines lock for no-preload-343495: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:09:13.646801   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:13.647256   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:13.647299   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:13.647180   61768 retry.go:31] will retry after 1.856056162s: waiting for machine to come up
	I1212 21:09:15.504815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:15.505228   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:15.505258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:15.505175   61768 retry.go:31] will retry after 2.008264333s: waiting for machine to come up
	I1212 21:09:17.516231   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:17.516653   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:17.516683   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:17.516604   61768 retry.go:31] will retry after 3.239343078s: waiting for machine to come up
	I1212 21:09:20.757258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:20.757696   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:20.757725   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:20.757654   61768 retry.go:31] will retry after 4.315081016s: waiting for machine to come up
	I1212 21:09:26.424166   60948 start.go:369] acquired machines lock for "old-k8s-version-372099" in 4m29.049387398s
	I1212 21:09:26.424241   60948 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:26.424254   60948 fix.go:54] fixHost starting: 
	I1212 21:09:26.424715   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:26.424763   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:26.444634   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I1212 21:09:26.445043   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:26.445520   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:09:26.445538   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:26.445863   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:26.446052   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:26.446192   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:09:26.447776   60948 fix.go:102] recreateIfNeeded on old-k8s-version-372099: state=Stopped err=<nil>
	I1212 21:09:26.447804   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	W1212 21:09:26.448015   60948 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:26.450126   60948 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-372099" ...
	I1212 21:09:26.451553   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Start
	I1212 21:09:26.451708   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring networks are active...
	I1212 21:09:26.452388   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network default is active
	I1212 21:09:26.452655   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network mk-old-k8s-version-372099 is active
	I1212 21:09:26.453124   60948 main.go:141] libmachine: (old-k8s-version-372099) Getting domain xml...
	I1212 21:09:26.453799   60948 main.go:141] libmachine: (old-k8s-version-372099) Creating domain...
	I1212 21:09:25.078112   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078553   60833 main.go:141] libmachine: (embed-certs-831188) Found IP for machine: 192.168.50.163
	I1212 21:09:25.078585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has current primary IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078596   60833 main.go:141] libmachine: (embed-certs-831188) Reserving static IP address...
	I1212 21:09:25.078997   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.079030   60833 main.go:141] libmachine: (embed-certs-831188) Reserved static IP address: 192.168.50.163
	I1212 21:09:25.079052   60833 main.go:141] libmachine: (embed-certs-831188) DBG | skip adding static IP to network mk-embed-certs-831188 - found existing host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"}
	I1212 21:09:25.079071   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Getting to WaitForSSH function...
	I1212 21:09:25.079085   60833 main.go:141] libmachine: (embed-certs-831188) Waiting for SSH to be available...
	I1212 21:09:25.080901   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081194   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.081242   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081366   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH client type: external
	I1212 21:09:25.081388   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa (-rw-------)
	I1212 21:09:25.081416   60833 main.go:141] libmachine: (embed-certs-831188) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:25.081426   60833 main.go:141] libmachine: (embed-certs-831188) DBG | About to run SSH command:
	I1212 21:09:25.081438   60833 main.go:141] libmachine: (embed-certs-831188) DBG | exit 0
	I1212 21:09:25.171277   60833 main.go:141] libmachine: (embed-certs-831188) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:25.171663   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetConfigRaw
	I1212 21:09:25.172345   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.174944   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175302   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.175333   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175553   60833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/config.json ...
	I1212 21:09:25.175828   60833 machine.go:88] provisioning docker machine ...
	I1212 21:09:25.175855   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:25.176065   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176212   60833 buildroot.go:166] provisioning hostname "embed-certs-831188"
	I1212 21:09:25.176233   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176371   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.178556   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178823   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.178850   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178957   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.179142   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179295   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179436   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.179558   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.179895   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.179910   60833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831188 && echo "embed-certs-831188" | sudo tee /etc/hostname
	I1212 21:09:25.312418   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831188
	
	I1212 21:09:25.312457   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.315156   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315529   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.315570   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315707   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.315895   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316053   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316211   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.316378   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.316840   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.316869   60833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831188' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831188/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831188' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:25.448302   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
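The two SSH commands above set the guest hostname and then make sure /etc/hosts resolves it (replace an existing 127.0.1.1 entry or append one). A minimal Go sketch of that /etc/hosts patch is shown below; it is illustrative only — minikube drives this over its own SSH runner — and the hostname value is simply the profile name from this log.

package main

import (
	"fmt"
	"os/exec"
)

// hostsPatch returns the same shell fragment the log runs over SSH:
// replace an existing 127.0.1.1 entry or append one for the hostname.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	// Hostname taken from the profile in this log; run locally for illustration.
	out, err := exec.Command("/bin/sh", "-c", hostsPatch("embed-certs-831188")).CombinedOutput()
	fmt.Printf("err=%v\noutput=%s\n", err, out)
}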
	I1212 21:09:25.448332   60833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:25.448353   60833 buildroot.go:174] setting up certificates
	I1212 21:09:25.448362   60833 provision.go:83] configureAuth start
	I1212 21:09:25.448369   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.448691   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.451262   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.451639   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451807   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.454144   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454434   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.454460   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454596   60833 provision.go:138] copyHostCerts
	I1212 21:09:25.454665   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:25.454689   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:25.454775   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:25.454928   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:25.454940   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:25.454984   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:25.455062   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:25.455073   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:25.455106   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:25.455171   60833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831188 san=[192.168.50.163 192.168.50.163 localhost 127.0.0.1 minikube embed-certs-831188]
	I1212 21:09:25.678855   60833 provision.go:172] copyRemoteCerts
	I1212 21:09:25.678942   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:25.678975   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.681866   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682221   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.682249   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682399   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.682590   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.682730   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.682856   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:25.773454   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:25.796334   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 21:09:25.818680   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:09:25.840234   60833 provision.go:86] duration metric: configureAuth took 391.845214ms
	I1212 21:09:25.840268   60833 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:25.840497   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:25.840643   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.842988   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843431   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.843482   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843586   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.843772   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.843946   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.844066   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.844227   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.844542   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.844563   60833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:26.167363   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:26.167388   60833 machine.go:91] provisioned docker machine in 991.541719ms
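Provisioning above writes CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the service CIDR) into /etc/sysconfig/crio.minikube and restarts CRI-O so the option takes effect. A small, hedged sketch of that file write follows; the path and option string are copied from the log, and the restart is just an exec of systemctl rather than minikube's runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Option string as it appears in the log above.
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	// Restart the runtime so the drop-in is picked up (requires root).
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Printf("restart crio: %v: %s\n", err, out)
	}
}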
	I1212 21:09:26.167398   60833 start.go:300] post-start starting for "embed-certs-831188" (driver="kvm2")
	I1212 21:09:26.167408   60833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:26.167444   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.167739   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:26.167763   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.170188   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170569   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.170611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170712   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.170880   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.171049   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.171194   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.261249   60833 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:26.265429   60833 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:26.265451   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:26.265522   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:26.265602   60833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:26.265695   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:26.274054   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:26.297890   60833 start.go:303] post-start completed in 130.478946ms
	I1212 21:09:26.297915   60833 fix.go:56] fixHost completed within 20.826462284s
	I1212 21:09:26.297934   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.300585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.300934   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.300975   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.301144   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.301359   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301529   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301665   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.301797   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:26.302153   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:26.302164   60833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:26.423978   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415366.370228005
	
	I1212 21:09:26.424008   60833 fix.go:206] guest clock: 1702415366.370228005
	I1212 21:09:26.424019   60833 fix.go:219] Guest: 2023-12-12 21:09:26.370228005 +0000 UTC Remote: 2023-12-12 21:09:26.297918475 +0000 UTC m=+278.991313322 (delta=72.30953ms)
	I1212 21:09:26.424052   60833 fix.go:190] guest clock delta is within tolerance: 72.30953ms
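fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the drift when it is within tolerance (72.3ms here). A standalone sketch of that comparison, fed with the two timestamps from this log, is below; the one-second tolerance is an assumed example value, not necessarily minikube's.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1702415366.370228005") // guest value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Unix(1702415366, 297918475) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta <= tolerance)
}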
	I1212 21:09:26.424061   60833 start.go:83] releasing machines lock for "embed-certs-831188", held for 20.952636536s
	I1212 21:09:26.424090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.424347   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:26.427068   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427479   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.427519   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427592   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428173   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428344   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428414   60833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:26.428470   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.428492   60833 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:26.428508   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.430943   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431251   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431371   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431393   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431548   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431631   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431654   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431776   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.431844   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431998   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.432040   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432285   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.432490   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.548980   60833 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:26.555211   60833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:26.707171   60833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:26.714564   60833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:26.714658   60833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:26.730858   60833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:26.730890   60833 start.go:475] detecting cgroup driver to use...
	I1212 21:09:26.730963   60833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:26.751316   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:26.766700   60833 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:26.766767   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:26.783157   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:26.799559   60833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:26.908659   60833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:27.029185   60833 docker.go:219] disabling docker service ...
	I1212 21:09:27.029245   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:27.042969   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:27.055477   60833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:27.174297   60833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:27.285338   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:27.299676   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:27.317832   60833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:09:27.317900   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.329270   60833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:27.329346   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.341201   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.353243   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.365796   60833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:27.377700   60833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:27.388796   60833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:27.388858   60833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:27.401983   60833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:27.411527   60833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:27.523326   60833 ssh_runner.go:195] Run: sudo systemctl restart crio
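The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image and cgroupfs cgroup manager), loads br_netfilter when the sysctl is missing, enables IPv4 forwarding, and restarts CRI-O. Below is a hedged Go sketch of the in-place config rewrite only; it re-implements the sed substitutions with a regexp, and the file path and values are simply copied from the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites a `key = ...` line in a CRI-O drop-in, mirroring the
// sed one-liners in the log above.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		fmt.Println(err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}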
	I1212 21:09:27.702370   60833 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:27.702435   60833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:27.707537   60833 start.go:543] Will wait 60s for crictl version
	I1212 21:09:27.707619   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:09:27.711502   60833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:27.750808   60833 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:27.750912   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.799419   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.848900   60833 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
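The runtime checks above shell out to `crictl version` and read a few "Key: Value" lines (RuntimeName, RuntimeVersion, RuntimeApiVersion). A small illustrative parser for that output format is sketched below, fed with the exact text from this log rather than a live crictl call.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits `crictl version` style output into a map of
// "Key: Value" pairs, e.g. RuntimeName -> cri-o.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	// Output copied from the log above.
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.24.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Printf("runtime %s %s (api %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
}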
	I1212 21:09:27.722142   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting to get IP...
	I1212 21:09:27.723300   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.723736   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.723806   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.723702   61894 retry.go:31] will retry after 267.755874ms: waiting for machine to come up
	I1212 21:09:27.993406   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.993917   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.993947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.993865   61894 retry.go:31] will retry after 314.872831ms: waiting for machine to come up
	I1212 21:09:28.310446   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.311022   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.311051   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.310971   61894 retry.go:31] will retry after 435.368111ms: waiting for machine to come up
	I1212 21:09:28.747774   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.748267   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.748299   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.748238   61894 retry.go:31] will retry after 521.305154ms: waiting for machine to come up
	I1212 21:09:29.270989   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.271519   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.271553   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.271446   61894 retry.go:31] will retry after 482.42376ms: waiting for machine to come up
	I1212 21:09:29.755222   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.755724   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.755755   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.755671   61894 retry.go:31] will retry after 676.918794ms: waiting for machine to come up
	I1212 21:09:30.434488   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:30.435072   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:30.435103   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:30.435025   61894 retry.go:31] will retry after 876.618903ms: waiting for machine to come up
	I1212 21:09:31.313270   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:31.313826   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:31.313857   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:31.313775   61894 retry.go:31] will retry after 1.03353638s: waiting for machine to come up
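While the embed-certs node is being provisioned, the old-k8s-version machine is still waiting for a DHCP lease, retrying with growing, slightly irregular delays ("will retry after ..."). The sketch below shows the same retry-with-backoff pattern in plain Go; the stand-in operation and the exact backoff factor are assumptions, not minikube's retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter keeps retrying op with a growing, jittered delay, similar in
// shape to the "will retry after ..." messages emitted above.
func retryAfter(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 1; i <= attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("attempt %d: will retry after %v: %v\n", i, wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(6, 250*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}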
	I1212 21:09:27.850614   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:27.853633   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854033   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:27.854069   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854243   60833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:27.858626   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:27.871999   60833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:09:27.872058   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:27.920758   60833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:09:27.920832   60833 ssh_runner.go:195] Run: which lz4
	I1212 21:09:27.924857   60833 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:27.929186   60833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:27.929220   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:09:29.834194   60833 crio.go:444] Took 1.909381 seconds to copy over tarball
	I1212 21:09:29.834285   60833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:32.348562   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:32.349019   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:32.349041   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:32.348978   61894 retry.go:31] will retry after 1.80085882s: waiting for machine to come up
	I1212 21:09:34.151943   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:34.152375   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:34.152416   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:34.152343   61894 retry.go:31] will retry after 2.08304575s: waiting for machine to come up
	I1212 21:09:36.238682   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:36.239115   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:36.239149   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:36.239074   61894 retry.go:31] will retry after 2.109809124s: waiting for machine to come up
	I1212 21:09:33.005355   60833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.171034001s)
	I1212 21:09:33.005386   60833 crio.go:451] Took 3.171167 seconds to extract the tarball
	I1212 21:09:33.005398   60833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:33.046773   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:33.101606   60833 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:09:33.101627   60833 cache_images.go:84] Images are preloaded, skipping loading
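Above, the node has no preloaded images, so the preload tarball is copied to /preloaded.tar.lz4 and unpacked with `tar -I lz4 -C /var -xf`, after which crictl reports all images as preloaded. The sketch below shows the same check-then-extract flow as a plain Go program; the paths are copied from the log, and shelling out to tar is an assumption rather than minikube's exact code path.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // guest-side path used in the log
	if _, err := os.Stat(tarball); err != nil {
		// In the log this is where the tarball gets scp'd from the host cache.
		fmt.Println("preload tarball missing, would copy it over first:", err)
		return
	}
	// Same extraction command the log runs on the guest.
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}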
	I1212 21:09:33.101689   60833 ssh_runner.go:195] Run: crio config
	I1212 21:09:33.162553   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:33.162584   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:33.162608   60833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:33.162637   60833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831188 NodeName:embed-certs-831188 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:09:33.162806   60833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831188"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:33.162923   60833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-831188 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
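The dump above is the kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration, and kubelet unit drop-in that minikube renders for this node before writing them to /var/tmp/minikube and /etc/systemd/system. As a hedged illustration of how such a config can be templated, the sketch below renders a trimmed-down InitConfiguration fragment from the node name, IP, and port seen in this log; the template text is only a subset of what minikube actually generates.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template of the InitConfiguration fragment printed above.
const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Values taken from the log above.
	params := struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{"embed-certs-831188", "192.168.50.163", 8443}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}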
	I1212 21:09:33.162978   60833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:09:33.171937   60833 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:33.172013   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:33.180480   60833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 21:09:33.197675   60833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:33.214560   60833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 21:09:33.234926   60833 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:33.238913   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:33.255261   60833 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188 for IP: 192.168.50.163
	I1212 21:09:33.255320   60833 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:33.255462   60833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:33.255496   60833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:33.255561   60833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/client.key
	I1212 21:09:33.255641   60833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key.6a576ed8
	I1212 21:09:33.255686   60833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key
	I1212 21:09:33.255781   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:33.255807   60833 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:33.255814   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:33.255835   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:33.255864   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:33.255885   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:33.255931   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:33.256505   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:33.282336   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:33.307179   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:33.332468   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:09:33.357444   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:33.383372   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:33.409070   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:33.438164   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:33.467676   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:33.496645   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:33.523126   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:33.548366   60833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:33.567745   60833 ssh_runner.go:195] Run: openssl version
	I1212 21:09:33.573716   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:33.584221   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589689   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589767   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.595880   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:33.609574   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:33.623129   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629541   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629615   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.635862   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:33.646421   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:33.656686   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661397   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661473   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.667092   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
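Each CA bundle copied above is linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is what the `openssl x509 -hash -noout` calls compute before the `ln -fs`. A hedged sketch of that hash-and-symlink step is below; it shells out to openssl exactly as the log does and treats the directory paths as given.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into dir under "<subject-hash>.0", the layout
// OpenSSL uses to look up trusted CAs.
func linkByHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Paths taken from the log above.
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}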
	I1212 21:09:33.677905   60833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:33.682795   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:33.689346   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:33.695822   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:33.702368   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:33.708500   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:33.714793   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:09:33.721121   60833 kubeadm.go:404] StartCluster: {Name:embed-certs-831188 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:33.721252   60833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:33.721319   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:33.759428   60833 cri.go:89] found id: ""
	I1212 21:09:33.759502   60833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:33.769592   60833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:33.769617   60833 kubeadm.go:636] restartCluster start
	I1212 21:09:33.769712   60833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:33.779313   60833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.780838   60833 kubeconfig.go:92] found "embed-certs-831188" server: "https://192.168.50.163:8443"
	I1212 21:09:33.784096   60833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:33.793192   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.793314   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.805112   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.805139   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.805196   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.816975   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.317757   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.317858   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.329702   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.817167   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.817266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.828633   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.317136   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.317230   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.328803   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.818032   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.818121   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.829428   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.318141   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.318253   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.330749   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.817284   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.817367   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.828787   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:37.317183   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.317266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.334557   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.350131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:38.350522   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:38.350546   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:38.350484   61894 retry.go:31] will retry after 2.423656351s: waiting for machine to come up
	I1212 21:09:40.777036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:40.777455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:40.777489   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:40.777399   61894 retry.go:31] will retry after 3.275180742s: waiting for machine to come up
	I1212 21:09:37.817090   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.817219   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.833813   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.317328   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.317409   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.334684   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.817255   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.817353   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.831011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.317555   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.317648   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.330189   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.817759   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.817866   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.830611   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.317127   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.317198   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.329508   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.817580   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.817677   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.829289   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.317853   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.317928   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.331394   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.818013   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.818098   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.829011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:42.317526   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.317610   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.329211   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:44.056058   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:44.056558   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:44.056587   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:44.056517   61894 retry.go:31] will retry after 4.729711581s: waiting for machine to come up
	I1212 21:09:42.818081   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.818166   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.829930   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.317420   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:43.317526   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:43.328536   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.794084   60833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:09:43.794118   60833 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:09:43.794129   60833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:09:43.794192   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:43.842360   60833 cri.go:89] found id: ""
	I1212 21:09:43.842431   60833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:09:43.859189   60833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:09:43.869065   60833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:09:43.869135   60833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878614   60833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878644   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.011533   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.544591   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.757944   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.850440   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.942874   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:09:44.942967   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:44.954886   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.466556   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.966545   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.465991   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.966021   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.987348   60833 api_server.go:72] duration metric: took 2.04447632s to wait for apiserver process to appear ...
	I1212 21:09:46.987374   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:09:46.987388   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.987890   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:46.987926   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.988389   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:50.008527   61298 start.go:369] acquired machines lock for "default-k8s-diff-port-171828" in 3m47.787737833s
	I1212 21:09:50.008595   61298 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:50.008607   61298 fix.go:54] fixHost starting: 
	I1212 21:09:50.008999   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:50.009035   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:50.025692   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I1212 21:09:50.026047   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:50.026541   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:09:50.026563   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:50.026945   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:50.027160   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:09:50.027344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:09:50.029005   61298 fix.go:102] recreateIfNeeded on default-k8s-diff-port-171828: state=Stopped err=<nil>
	I1212 21:09:50.029031   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	W1212 21:09:50.029193   61298 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:50.031805   61298 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-171828" ...
	I1212 21:09:48.789770   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790158   60948 main.go:141] libmachine: (old-k8s-version-372099) Found IP for machine: 192.168.39.202
	I1212 21:09:48.790172   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserving static IP address...
	I1212 21:09:48.790195   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790655   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.790683   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserved static IP address: 192.168.39.202
	I1212 21:09:48.790701   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | skip adding static IP to network mk-old-k8s-version-372099 - found existing host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"}
	I1212 21:09:48.790719   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Getting to WaitForSSH function...
	I1212 21:09:48.790736   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting for SSH to be available...
	I1212 21:09:48.793069   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793392   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.793418   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793542   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH client type: external
	I1212 21:09:48.793582   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa (-rw-------)
	I1212 21:09:48.793610   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:48.793620   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | About to run SSH command:
	I1212 21:09:48.793629   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | exit 0
	I1212 21:09:48.883487   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:48.883885   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetConfigRaw
	I1212 21:09:48.884519   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:48.887128   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.887485   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887734   60948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/config.json ...
	I1212 21:09:48.887918   60948 machine.go:88] provisioning docker machine ...
	I1212 21:09:48.887936   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:48.888097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888225   60948 buildroot.go:166] provisioning hostname "old-k8s-version-372099"
	I1212 21:09:48.888238   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888378   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:48.890462   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890820   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.890847   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890982   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:48.891139   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891289   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:48.891597   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:48.891940   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:48.891955   60948 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-372099 && echo "old-k8s-version-372099" | sudo tee /etc/hostname
	I1212 21:09:49.012923   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-372099
	
	I1212 21:09:49.012954   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.015698   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016076   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.016117   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016245   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.016437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016583   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016710   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.016859   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.017308   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.017338   60948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-372099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-372099/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-372099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:49.144804   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:49.144842   60948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:49.144875   60948 buildroot.go:174] setting up certificates
	I1212 21:09:49.144885   60948 provision.go:83] configureAuth start
	I1212 21:09:49.144896   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:49.145181   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:49.147947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148294   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.148340   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148475   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.151218   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.151697   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.151760   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.152022   60948 provision.go:138] copyHostCerts
	I1212 21:09:49.152083   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:49.152102   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:49.152172   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:49.152299   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:49.152307   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:49.152335   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:49.152402   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:49.152407   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:49.152428   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:49.152485   60948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-372099 san=[192.168.39.202 192.168.39.202 localhost 127.0.0.1 minikube old-k8s-version-372099]
	I1212 21:09:49.298406   60948 provision.go:172] copyRemoteCerts
	I1212 21:09:49.298478   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:49.298508   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.301384   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301696   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.301729   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301948   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.302156   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.302320   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.302442   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.385046   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:09:49.409667   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:49.434002   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:09:49.458872   60948 provision.go:86] duration metric: configureAuth took 313.97378ms
	I1212 21:09:49.458907   60948 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:49.459075   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:09:49.459143   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.461794   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.462183   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462373   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.462574   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462730   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462857   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.463042   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.463594   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.463641   60948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:49.767652   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:49.767745   60948 machine.go:91] provisioned docker machine in 879.803204ms
	I1212 21:09:49.767772   60948 start.go:300] post-start starting for "old-k8s-version-372099" (driver="kvm2")
	I1212 21:09:49.767785   60948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:49.767812   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:49.768162   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:49.768191   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.770970   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771351   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.771388   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771595   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.771805   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.772009   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.772155   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.857053   60948 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:49.861510   60948 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:49.861535   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:49.861600   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:49.861672   60948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:49.861781   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:49.869967   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:49.892746   60948 start.go:303] post-start completed in 124.959403ms
	I1212 21:09:49.892768   60948 fix.go:56] fixHost completed within 23.468514721s
	I1212 21:09:49.892790   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.895273   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895618   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.895653   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895776   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.895951   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896269   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.896433   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.896887   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.896904   60948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:50.008384   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415389.953345991
	
	I1212 21:09:50.008407   60948 fix.go:206] guest clock: 1702415389.953345991
	I1212 21:09:50.008415   60948 fix.go:219] Guest: 2023-12-12 21:09:49.953345991 +0000 UTC Remote: 2023-12-12 21:09:49.89277138 +0000 UTC m=+292.853960893 (delta=60.574611ms)
	I1212 21:09:50.008441   60948 fix.go:190] guest clock delta is within tolerance: 60.574611ms
	I1212 21:09:50.008445   60948 start.go:83] releasing machines lock for "old-k8s-version-372099", held for 23.584233709s
	I1212 21:09:50.008469   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.008757   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:50.011577   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.011930   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.011958   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.012109   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012750   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012964   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.013059   60948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:50.013102   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.013195   60948 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:50.013222   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.016031   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016304   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016525   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016566   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016720   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.016815   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016855   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016883   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017008   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.017080   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017186   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017256   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.017357   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017520   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.125100   60948 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:50.132264   60948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:50.278965   60948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:50.286230   60948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:50.286308   60948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:50.301165   60948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:50.301192   60948 start.go:475] detecting cgroup driver to use...
	I1212 21:09:50.301256   60948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:50.318715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:50.331943   60948 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:50.332013   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:50.348872   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:50.366970   60948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:50.492936   60948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:50.620103   60948 docker.go:219] disabling docker service ...
	I1212 21:09:50.620185   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:50.632962   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:50.644797   60948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:50.759039   60948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:50.884352   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:50.896549   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:50.919987   60948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 21:09:50.920056   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.932147   60948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:50.932224   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.941195   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.951010   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.962752   60948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:50.975125   60948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:50.984906   60948 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:50.984971   60948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:50.999594   60948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:51.010344   60948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:51.114607   60948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:51.318020   60948 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:51.318108   60948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:51.325048   60948 start.go:543] Will wait 60s for crictl version
	I1212 21:09:51.325134   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:51.329905   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:51.377974   60948 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:51.378075   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.444024   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.512531   60948 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 21:09:51.514171   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:51.517083   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517636   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:51.517667   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517886   60948 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:51.522137   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:51.538124   60948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 21:09:51.538191   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:51.594603   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:51.594688   60948 ssh_runner.go:195] Run: which lz4
	I1212 21:09:51.599732   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:51.604811   60948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:51.604844   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 21:09:50.033553   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Start
	I1212 21:09:50.033768   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring networks are active...
	I1212 21:09:50.034638   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network default is active
	I1212 21:09:50.035192   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network mk-default-k8s-diff-port-171828 is active
	I1212 21:09:50.035630   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Getting domain xml...
	I1212 21:09:50.036380   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Creating domain...
	I1212 21:09:51.530274   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting to get IP...
	I1212 21:09:51.531329   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531766   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.531744   62039 retry.go:31] will retry after 271.90604ms: waiting for machine to come up
	I1212 21:09:51.805469   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806028   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806062   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.805967   62039 retry.go:31] will retry after 338.221769ms: waiting for machine to come up
	I1212 21:09:47.488610   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.543731   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.543786   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.543807   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.654915   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.654949   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.989408   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.996278   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:51.996337   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.488734   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.496289   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:52.496327   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.989065   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.997013   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:09:53.012736   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:09:53.012777   60833 api_server.go:131] duration metric: took 6.025395735s to wait for apiserver health ...
	I1212 21:09:53.012789   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:53.012806   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:53.014820   60833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:09:53.016797   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:09:53.047434   60833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:09:53.095811   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:09:53.115354   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:09:53.115441   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:09:53.115465   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:09:53.115504   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:09:53.115532   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:09:53.115551   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:09:53.115582   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:09:53.115607   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:09:53.115633   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:09:53.115643   60833 system_pods.go:74] duration metric: took 19.808922ms to wait for pod list to return data ...
	I1212 21:09:53.115655   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:09:53.127006   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:09:53.127044   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:09:53.127058   60833 node_conditions.go:105] duration metric: took 11.39604ms to run NodePressure ...
	I1212 21:09:53.127079   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:53.597509   60833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603447   60833 kubeadm.go:787] kubelet initialised
	I1212 21:09:53.603476   60833 kubeadm.go:788] duration metric: took 5.932359ms waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603486   60833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:09:53.616570   60833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.623514   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623547   60833 pod_ready.go:81] duration metric: took 6.940441ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.623560   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623570   60833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.631395   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631426   60833 pod_ready.go:81] duration metric: took 7.844548ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.631438   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631453   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.649647   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649681   60833 pod_ready.go:81] duration metric: took 18.215042ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.649693   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649702   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.662239   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662271   60833 pod_ready.go:81] duration metric: took 12.552977ms waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.662285   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662298   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.005841   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005879   60833 pod_ready.go:81] duration metric: took 343.569867ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.005892   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005908   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.403249   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403280   60833 pod_ready.go:81] duration metric: took 397.363687ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.403291   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403297   60833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.802330   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802367   60833 pod_ready.go:81] duration metric: took 399.057426ms waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.802380   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802390   60833 pod_ready.go:38] duration metric: took 1.198894195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
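Note: the waits above query each system-critical pod's Ready condition but record the result as an error while node "embed-certs-831188" itself still reports Ready=False. A rough client-go sketch of the underlying per-pod readiness check (this is not minikube's pod_ready.go implementation; the kubeconfig path and pod name are copied from this log purely for illustration):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named kube-system pod has a Ready condition of True.
	func podIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Illustrative kubeconfig path taken from this log; a real harness may build its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17734-9188/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := podIsReady(cs, "coredns-5dd5756b68-zj5wn")
		fmt.Println(ready, err)
	}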
	I1212 21:09:54.802413   60833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:09:54.822125   60833 ops.go:34] apiserver oom_adj: -16
	I1212 21:09:54.822154   60833 kubeadm.go:640] restartCluster took 21.052529291s
	I1212 21:09:54.822173   60833 kubeadm.go:406] StartCluster complete in 21.101061651s
	I1212 21:09:54.822194   60833 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.822273   60833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:09:54.825185   60833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.825490   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:09:54.825622   60833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:09:54.825714   60833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831188"
	I1212 21:09:54.825735   60833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-831188"
	W1212 21:09:54.825756   60833 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:09:54.825806   60833 addons.go:69] Setting metrics-server=true in profile "embed-certs-831188"
	I1212 21:09:54.825837   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.825849   60833 addons.go:231] Setting addon metrics-server=true in "embed-certs-831188"
	W1212 21:09:54.825863   60833 addons.go:240] addon metrics-server should already be in state true
	I1212 21:09:54.825969   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.826276   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826309   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826522   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826588   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826731   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:54.826767   60833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831188"
	I1212 21:09:54.826847   60833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831188"
	I1212 21:09:54.827349   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.827409   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.834506   60833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-831188" context rescaled to 1 replicas
	I1212 21:09:54.834614   60833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:09:54.837122   60833 out.go:177] * Verifying Kubernetes components...
	I1212 21:09:54.839094   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:09:54.846081   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I1212 21:09:54.846737   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847078   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I1212 21:09:54.847367   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.847387   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.847518   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847775   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848031   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.848053   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.848061   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.848355   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848912   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.848955   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.849635   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1212 21:09:54.849986   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.852255   60833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-831188"
	W1212 21:09:54.852279   60833 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:09:54.852306   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.852727   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.852758   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.853259   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.853289   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.853643   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.854187   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.854223   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.870249   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I1212 21:09:54.870805   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.871406   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.871430   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.871920   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.872090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.873692   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.876011   60833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:54.874681   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1212 21:09:54.877102   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1212 21:09:54.877666   60833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:54.877691   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:09:54.877710   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.877993   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878108   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878602   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878622   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.878738   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878754   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.879004   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.879362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.879426   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.880445   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.880486   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.881642   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.883715   60833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:09:54.885165   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:09:54.885184   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:09:54.885199   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.883021   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.883884   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.885257   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.885295   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.885442   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.885598   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.885727   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.893093   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893096   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.893152   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.893190   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.893534   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.893676   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.902833   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1212 21:09:54.903320   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.903867   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.903888   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.904337   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.904535   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.906183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.906443   60833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:54.906463   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:09:54.906484   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.909330   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.909914   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.909954   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.910136   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.910328   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.910492   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.910639   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:55.020642   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:55.123475   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:55.141398   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:09:55.141429   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:09:55.200799   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:09:55.200833   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:09:55.275142   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:55.275172   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:09:55.308985   60833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831188" to be "Ready" ...
	I1212 21:09:55.309133   60833 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:09:55.341251   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:56.829715   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706199185s)
	I1212 21:09:56.829768   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829780   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.829784   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.809111646s)
	I1212 21:09:56.829860   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829870   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830143   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.830166   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.830178   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.830188   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830267   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.831959   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832013   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.832048   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.831765   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831788   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831794   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.832139   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832236   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.833156   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.833196   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.843517   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.843542   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.843815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.843870   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.843880   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.023745   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.682445607s)
	I1212 21:09:57.023801   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.023815   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024252   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024263   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:57.024276   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024287   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.024303   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024676   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024691   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024706   60833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-831188"
	I1212 21:09:53.564404   60948 crio.go:444] Took 1.964711 seconds to copy over tarball
	I1212 21:09:53.564488   60948 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:57.052627   60948 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.488106402s)
	I1212 21:09:57.052657   60948 crio.go:451] Took 3.488218 seconds to extract the tarball
	I1212 21:09:57.052669   60948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:52.145724   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146453   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146484   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.146352   62039 retry.go:31] will retry after 482.98499ms: waiting for machine to come up
	I1212 21:09:52.630862   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631317   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631343   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.631232   62039 retry.go:31] will retry after 480.323704ms: waiting for machine to come up
	I1212 21:09:53.113661   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.114249   62039 retry.go:31] will retry after 649.543956ms: waiting for machine to come up
	I1212 21:09:53.765102   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765613   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765643   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.765558   62039 retry.go:31] will retry after 824.137815ms: waiting for machine to come up
	I1212 21:09:54.591782   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592356   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592391   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:54.592273   62039 retry.go:31] will retry after 874.563899ms: waiting for machine to come up
	I1212 21:09:55.468934   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469429   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469459   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:55.469393   62039 retry.go:31] will retry after 1.224276076s: waiting for machine to come up
	I1212 21:09:56.695111   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695604   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695637   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:56.695560   62039 retry.go:31] will retry after 1.207984075s: waiting for machine to come up
	I1212 21:09:57.157310   60833 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:09:57.322702   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:57.093318   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:57.723104   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:57.723132   60948 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:09:57.723259   60948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.723342   60948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.723442   60948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 21:09:57.723302   60948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724835   60948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 21:09:57.724864   60948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.724861   60948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.724836   60948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724853   60948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.724842   60948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.724847   60948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.724893   60948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.918047   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.920893   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.927072   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.928080   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.931259   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 21:09:57.932017   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.939580   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.990594   60948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 21:09:57.990667   60948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.990724   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.059759   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:58.095401   60948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 21:09:58.095451   60948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.095504   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138192   60948 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 21:09:58.138287   60948 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.138333   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138491   60948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 21:09:58.138532   60948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.138594   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145060   60948 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 21:09:58.145116   60948 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 21:09:58.145146   60948 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 21:09:58.145177   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145185   60948 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.145225   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145073   60948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 21:09:58.145250   60948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.145271   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145322   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:58.268621   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.268721   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.268774   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.268826   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 21:09:58.268863   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.268895   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.268956   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 21:09:58.408748   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 21:09:58.418795   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 21:09:58.418843   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 21:09:58.420451   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 21:09:58.420516   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 21:09:58.420577   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.420585   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 21:09:58.425621   60948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 21:09:58.425639   60948 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.425684   60948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 21:09:59.172682   60948 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 21:09:59.172736   60948 cache_images.go:92] LoadImages completed in 1.449590507s
	W1212 21:09:59.172819   60948 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1212 21:09:59.172900   60948 ssh_runner.go:195] Run: crio config
	I1212 21:09:59.238502   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:09:59.238522   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:59.238539   60948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:59.238560   60948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-372099 NodeName:old-k8s-version-372099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 21:09:59.238733   60948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-372099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-372099
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.202:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:59.238886   60948 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-372099 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:59.238953   60948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 21:09:59.249183   60948 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:59.249271   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:59.263171   60948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 21:09:59.281172   60948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:59.302622   60948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 21:09:59.323131   60948 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:59.327344   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:59.342182   60948 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099 for IP: 192.168.39.202
	I1212 21:09:59.342216   60948 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:59.342412   60948 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:59.342465   60948 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:59.342554   60948 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/client.key
	I1212 21:09:59.342659   60948 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key.9e66e972
	I1212 21:09:59.342723   60948 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key
	I1212 21:09:59.342854   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:59.342891   60948 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:59.342908   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:59.342947   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:59.342984   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:59.343024   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:59.343081   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:59.343948   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:59.375250   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:59.404892   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:59.434762   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:09:59.465696   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:59.496528   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:59.521739   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:59.545606   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:59.574153   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:59.599089   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:59.625217   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:59.654715   60948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:59.674946   60948 ssh_runner.go:195] Run: openssl version
	I1212 21:09:59.683295   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:59.697159   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702671   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702745   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.710931   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:59.723204   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:59.735713   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741621   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741715   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.748041   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:59.760217   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:59.772701   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778501   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778589   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.787066   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:59.803355   60948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:59.809920   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:59.819093   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:59.827918   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:59.836228   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:59.845437   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:59.852647   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:09:59.861170   60948 kubeadm.go:404] StartCluster: {Name:old-k8s-version-372099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:59.861285   60948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:59.861358   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:59.906807   60948 cri.go:89] found id: ""
	I1212 21:09:59.906885   60948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:59.919539   60948 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:59.919579   60948 kubeadm.go:636] restartCluster start
	I1212 21:09:59.919637   60948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:59.930547   60948 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.931845   60948 kubeconfig.go:92] found "old-k8s-version-372099" server: "https://192.168.39.202:8443"
	I1212 21:09:59.934471   60948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:59.945701   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.945780   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.959415   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.959438   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.959496   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.975677   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.476388   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.476469   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.493781   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.976367   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.976475   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.993084   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.476277   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.476362   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.490076   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.976393   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.976505   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.990771   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:57.905327   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905730   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:57.905649   62039 retry.go:31] will retry after 1.427858275s: waiting for machine to come up
	I1212 21:09:59.335284   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335735   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:59.335630   62039 retry.go:31] will retry after 1.773169552s: waiting for machine to come up
	I1212 21:10:01.110044   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110533   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:01.110468   62039 retry.go:31] will retry after 2.199207847s: waiting for machine to come up
	I1212 21:09:57.672094   60833 addons.go:502] enable addons completed in 2.846462968s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:09:59.822907   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:01.824673   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:02.325980   60833 node_ready.go:49] node "embed-certs-831188" has status "Ready":"True"
	I1212 21:10:02.326008   60833 node_ready.go:38] duration metric: took 7.016985612s waiting for node "embed-certs-831188" to be "Ready" ...
	I1212 21:10:02.326021   60833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:02.339547   60833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345609   60833 pod_ready.go:92] pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.345638   60833 pod_ready.go:81] duration metric: took 6.052243ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345652   60833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.476354   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.476429   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.489326   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:02.975846   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.975935   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.992975   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.476463   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.476577   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.489471   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.975762   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.975891   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.992773   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.476395   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.476510   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.489163   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.976403   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.976503   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.990508   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.475988   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.476108   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.489347   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.975811   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.975874   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.988996   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.475817   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.475896   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.487886   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.976376   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.976445   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.988627   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.312460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312859   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312892   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:03.312807   62039 retry.go:31] will retry after 4.329332977s: waiting for machine to come up
	I1212 21:10:02.864894   60833 pod_ready.go:92] pod "etcd-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.864921   60833 pod_ready.go:81] duration metric: took 519.26143ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.864935   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871360   60833 pod_ready.go:92] pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.871392   60833 pod_ready.go:81] duration metric: took 6.449389ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871406   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529203   60833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.529228   60833 pod_ready.go:81] duration metric: took 1.657813273s waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529243   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722607   60833 pod_ready.go:92] pod "kube-proxy-nsv4w" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.722631   60833 pod_ready.go:81] duration metric: took 193.381057ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722641   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124360   60833 pod_ready.go:92] pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:05.124388   60833 pod_ready.go:81] duration metric: took 401.739767ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124401   60833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:07.476521   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.476603   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.487362   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:07.976016   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.976101   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.987221   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.475793   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.475894   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.486641   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.976140   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.976262   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.987507   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.476080   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:09.476168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:09.487537   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.946342   60948 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:09.946377   60948 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:09.946412   60948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:09.946487   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:09.988850   60948 cri.go:89] found id: ""
	I1212 21:10:09.988939   60948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:10.004726   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:10.015722   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:10.015787   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025706   60948 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025743   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:10.156614   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.030056   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.219060   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.315587   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.398016   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:11.398110   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.411642   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.927297   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:07.644473   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644921   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644950   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:07.644868   62039 retry.go:31] will retry after 5.180616294s: waiting for machine to come up
	I1212 21:10:07.428366   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:09.929940   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.157275   60628 start.go:369] acquired machines lock for "no-preload-343495" in 1m3.684137096s
	I1212 21:10:14.157330   60628 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:10:14.157342   60628 fix.go:54] fixHost starting: 
	I1212 21:10:14.157767   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:14.157812   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:14.175936   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 21:10:14.176421   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:14.176957   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:10:14.176982   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:14.177380   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:14.177601   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:14.177804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:10:14.179672   60628 fix.go:102] recreateIfNeeded on no-preload-343495: state=Stopped err=<nil>
	I1212 21:10:14.179696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	W1212 21:10:14.179911   60628 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:10:14.183064   60628 out.go:177] * Restarting existing kvm2 VM for "no-preload-343495" ...
	I1212 21:10:12.828825   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.829471   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Found IP for machine: 192.168.72.253
	I1212 21:10:12.829501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserving static IP address...
	I1212 21:10:12.829530   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has current primary IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.830061   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.830110   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | skip adding static IP to network mk-default-k8s-diff-port-171828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"}
	I1212 21:10:12.830133   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserved static IP address: 192.168.72.253
	I1212 21:10:12.830152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Getting to WaitForSSH function...
	I1212 21:10:12.830163   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for SSH to be available...
	I1212 21:10:12.832654   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833033   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.833065   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833273   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH client type: external
	I1212 21:10:12.833302   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa (-rw-------)
	I1212 21:10:12.833335   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:12.833352   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | About to run SSH command:
	I1212 21:10:12.833370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | exit 0
	I1212 21:10:12.931871   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:12.932439   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetConfigRaw
	I1212 21:10:12.933250   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:12.936555   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937009   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.937051   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937341   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:10:12.937642   61298 machine.go:88] provisioning docker machine ...
	I1212 21:10:12.937669   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:12.937933   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938136   61298 buildroot.go:166] provisioning hostname "default-k8s-diff-port-171828"
	I1212 21:10:12.938161   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938373   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:12.941209   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941589   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.941620   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941796   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:12.941978   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942183   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:12.942539   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:12.942885   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:12.942904   61298 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-171828 && echo "default-k8s-diff-port-171828" | sudo tee /etc/hostname
	I1212 21:10:13.099123   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-171828
	
	I1212 21:10:13.099152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.102085   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.102496   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102756   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.102965   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103166   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.103580   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.104000   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.104034   61298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-171828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-171828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-171828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:13.246501   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:13.246535   61298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:13.246561   61298 buildroot.go:174] setting up certificates
	I1212 21:10:13.246577   61298 provision.go:83] configureAuth start
	I1212 21:10:13.246590   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:13.246875   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:13.249703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250010   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.250043   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250196   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.252501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.252814   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.252852   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.253086   61298 provision.go:138] copyHostCerts
	I1212 21:10:13.253151   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:13.253171   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:13.253266   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:13.253399   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:13.253412   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:13.253437   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:13.253501   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:13.253508   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:13.253526   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:13.253586   61298 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-171828 san=[192.168.72.253 192.168.72.253 localhost 127.0.0.1 minikube default-k8s-diff-port-171828]
	I1212 21:10:13.331755   61298 provision.go:172] copyRemoteCerts
	I1212 21:10:13.331819   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:13.331841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.334412   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334741   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.334777   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334981   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.335185   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.335369   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.335498   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.429448   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:10:13.454350   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:13.479200   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 21:10:13.505120   61298 provision.go:86] duration metric: configureAuth took 258.53005ms
	I1212 21:10:13.505151   61298 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:13.505370   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:13.505451   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.508400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.508826   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.508858   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.509144   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.509360   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509524   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.509829   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.510161   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.510184   61298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:13.874783   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:13.874810   61298 machine.go:91] provisioned docker machine in 937.151566ms
	I1212 21:10:13.874822   61298 start.go:300] post-start starting for "default-k8s-diff-port-171828" (driver="kvm2")
	I1212 21:10:13.874835   61298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:13.874853   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:13.875182   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:13.875213   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.877937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.878400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878640   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.878819   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.878984   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.879148   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.978276   61298 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:13.984077   61298 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:13.984114   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:13.984229   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:13.984309   61298 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:13.984391   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:13.996801   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:14.021773   61298 start.go:303] post-start completed in 146.935628ms
	I1212 21:10:14.021796   61298 fix.go:56] fixHost completed within 24.013191129s
	I1212 21:10:14.021815   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.024847   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.025227   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.025599   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025788   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025951   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.026106   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:14.026436   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:14.026452   61298 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:14.157053   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415414.138141396
	
	I1212 21:10:14.157082   61298 fix.go:206] guest clock: 1702415414.138141396
	I1212 21:10:14.157092   61298 fix.go:219] Guest: 2023-12-12 21:10:14.138141396 +0000 UTC Remote: 2023-12-12 21:10:14.021800288 +0000 UTC m=+251.962428882 (delta=116.341108ms)
	I1212 21:10:14.157130   61298 fix.go:190] guest clock delta is within tolerance: 116.341108ms
	I1212 21:10:14.157141   61298 start.go:83] releasing machines lock for "default-k8s-diff-port-171828", held for 24.148576854s
	I1212 21:10:14.157193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.157567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:14.160748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161134   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.161172   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161489   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162089   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162259   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162333   61298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:14.162389   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.162627   61298 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:14.162652   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.165726   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.165941   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166485   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166548   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166598   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166636   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166649   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166905   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166907   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167104   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167153   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167231   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.167349   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167500   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.294350   61298 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:14.301705   61298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:14.459967   61298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:14.467979   61298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:14.468043   61298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:14.483883   61298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:14.483910   61298 start.go:475] detecting cgroup driver to use...
	I1212 21:10:14.483976   61298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:14.498105   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:14.511716   61298 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:14.511784   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:14.525795   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:14.539213   61298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:14.658453   61298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:14.786222   61298 docker.go:219] disabling docker service ...
	I1212 21:10:14.786296   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:14.801656   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:14.814821   61298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:14.950542   61298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:15.085306   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:15.098508   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:15.118634   61298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:15.118709   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.130579   61298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:15.130667   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.140672   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.150340   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.161966   61298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:15.173049   61298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:15.181620   61298 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:15.181703   61298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:15.195505   61298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
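The status-255 sysctl above is expected: /proc/sys/net/bridge only exists once the br_netfilter module is loaded, which is exactly what the following modprobe does. Reproduced by hand (a sketch using the same guest paths):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables    # resolves now that the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # IP forwarding is needed for pod traffic through the bridge CNI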
	I1212 21:10:15.204076   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:15.327587   61298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:15.505003   61298 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:15.505078   61298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:15.512282   61298 start.go:543] Will wait 60s for crictl version
	I1212 21:10:15.512349   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:10:15.516564   61298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:15.556821   61298 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:15.556906   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.612743   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.665980   61298 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 21:10:12.426883   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.927168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.962834   60948 api_server.go:72] duration metric: took 1.56481721s to wait for apiserver process to appear ...
	I1212 21:10:12.962862   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:12.962890   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.963447   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:12.963489   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.964022   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:13.464393   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:15.667323   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:15.670368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.670769   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:15.670804   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.671037   61298 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:15.675575   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:15.688523   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:10:15.688602   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:15.739601   61298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:10:15.739718   61298 ssh_runner.go:195] Run: which lz4
	I1212 21:10:15.744272   61298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:10:15.749574   61298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:10:15.749612   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:10:12.428614   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.430542   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:16.442797   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.184429   60628 main.go:141] libmachine: (no-preload-343495) Calling .Start
	I1212 21:10:14.184692   60628 main.go:141] libmachine: (no-preload-343495) Ensuring networks are active...
	I1212 21:10:14.186580   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network default is active
	I1212 21:10:14.187398   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network mk-no-preload-343495 is active
	I1212 21:10:14.188587   60628 main.go:141] libmachine: (no-preload-343495) Getting domain xml...
	I1212 21:10:14.189457   60628 main.go:141] libmachine: (no-preload-343495) Creating domain...
	I1212 21:10:15.509306   60628 main.go:141] libmachine: (no-preload-343495) Waiting to get IP...
	I1212 21:10:15.510320   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.510728   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.510772   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.510702   62255 retry.go:31] will retry after 275.567053ms: waiting for machine to come up
	I1212 21:10:15.788793   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.789233   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.789262   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.789193   62255 retry.go:31] will retry after 341.343409ms: waiting for machine to come up
	I1212 21:10:16.131936   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.132427   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.132452   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.132377   62255 retry.go:31] will retry after 302.905542ms: waiting for machine to come up
	I1212 21:10:16.437184   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.437944   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.437968   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.437850   62255 retry.go:31] will retry after 407.178114ms: waiting for machine to come up
	I1212 21:10:16.846738   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.847393   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.847429   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.847349   62255 retry.go:31] will retry after 507.703222ms: waiting for machine to come up
	I1212 21:10:17.357373   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:17.357975   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:17.358005   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:17.357907   62255 retry.go:31] will retry after 920.403188ms: waiting for machine to come up
	I1212 21:10:18.464726   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 21:10:18.464781   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.736922   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.736969   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.736990   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.816132   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.816165   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.964508   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.012996   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.013048   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.464538   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.509558   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.509601   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.965183   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:21.369579   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:10:21.381334   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:10:21.381365   60948 api_server.go:131] duration metric: took 8.418495294s to wait for apiserver health ...
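The 403 -> 500 -> 200 progression above is the usual startup sequence for an anonymous /healthz probe: RBAC bootstrap roles are not installed yet (403), then a few post-start hooks are still failing (500), and finally everything reports ok (200). The same probe can be run by hand with an insecure curl (a sketch; -k is used because the request does not present the cluster CA):

    curl -ks https://192.168.39.202:8443/healthz; echo
    # prints "ok" once the control plane has settled; add ?verbose for the per-check breakdown shown above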
	I1212 21:10:21.381378   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.381385   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.501371   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:21.801933   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:21.827010   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:21.853900   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:17.641827   61298 crio.go:444] Took 1.897583 seconds to copy over tarball
	I1212 21:10:17.641919   61298 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:10:21.283045   61298 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.641094924s)
	I1212 21:10:21.283076   61298 crio.go:451] Took 3.641222 seconds to extract the tarball
	I1212 21:10:21.283088   61298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:10:21.328123   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:21.387894   61298 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:10:21.387923   61298 cache_images.go:84] Images are preloaded, skipping loading
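The preload step above amounts to shipping an lz4-compressed tarball of the CRI-O image store to the guest and unpacking it under /var, so no images need to be pulled. The guest-side equivalent, using the exact paths from this run (a sketch):

    # after the tarball has been copied to /preloaded.tar.lz4
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | grep kube-apiserver   # registry.k8s.io/kube-apiserver:v1.28.4 should now be listed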
	I1212 21:10:21.387996   61298 ssh_runner.go:195] Run: crio config
	I1212 21:10:21.467191   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.467216   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.467255   61298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:21.467278   61298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-171828 NodeName:default-k8s-diff-port-171828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:21.467443   61298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-171828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:21.467537   61298 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-171828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 21:10:21.467596   61298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:10:21.478940   61298 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:21.479024   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:21.492604   61298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 21:10:21.514260   61298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:10:21.535059   61298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 21:10:21.557074   61298 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:21.562765   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
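Both hosts-file edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same pattern: drop any stale line, append the fresh mapping, then copy the temp file back over /etc/hosts with sudo. Afterwards the guest resolves both names locally (a sketch of the expected entries):

    grep minikube.internal /etc/hosts
    # 192.168.72.1	host.minikube.internal
    # 192.168.72.253	control-plane.minikube.internal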
	I1212 21:10:21.578989   61298 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828 for IP: 192.168.72.253
	I1212 21:10:21.579047   61298 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:21.579282   61298 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:21.579383   61298 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:21.579495   61298 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/client.key
	I1212 21:10:21.768212   61298 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key.a1600f99
	I1212 21:10:21.768305   61298 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key
	I1212 21:10:21.768447   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:21.768489   61298 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:21.768504   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:21.768542   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:21.768596   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:21.768625   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:21.768680   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:21.769557   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:21.800794   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:10:21.833001   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:21.864028   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:21.893107   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:21.918580   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:21.944095   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:21.970251   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:21.998947   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:22.027620   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:22.056851   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:22.084321   61298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:22.103273   61298 ssh_runner.go:195] Run: openssl version
	I1212 21:10:22.109518   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:18.932477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:21.431431   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:18.280164   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:18.280656   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:18.280687   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:18.280612   62255 retry.go:31] will retry after 761.825655ms: waiting for machine to come up
	I1212 21:10:19.043686   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:19.044170   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:19.044203   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:19.044117   62255 retry.go:31] will retry after 1.173408436s: waiting for machine to come up
	I1212 21:10:20.218938   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:20.219457   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:20.219488   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:20.219412   62255 retry.go:31] will retry after 1.484817124s: waiting for machine to come up
	I1212 21:10:21.706027   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:21.706505   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:21.706536   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:21.706467   62255 retry.go:31] will retry after 2.260831172s: waiting for machine to come up
	I1212 21:10:22.159195   60948 system_pods.go:59] 7 kube-system pods found
	I1212 21:10:22.284903   60948 system_pods.go:61] "coredns-5644d7b6d9-slvnx" [0db32241-69df-48dc-a60f-6921f9c5746f] Running
	I1212 21:10:22.284916   60948 system_pods.go:61] "etcd-old-k8s-version-372099" [72d219cb-b393-423d-ba62-b880bd2d26a0] Running
	I1212 21:10:22.284924   60948 system_pods.go:61] "kube-apiserver-old-k8s-version-372099" [c4f09d2d-07d2-4403-886b-37cb1471e7e5] Running
	I1212 21:10:22.284932   60948 system_pods.go:61] "kube-controller-manager-old-k8s-version-372099" [4a17c60c-2c72-4296-a7e4-0ae05e7bfa39] Running
	I1212 21:10:22.284939   60948 system_pods.go:61] "kube-proxy-5mvzb" [ec7c6540-35e2-4ae4-8592-d797132a8328] Running
	I1212 21:10:22.284945   60948 system_pods.go:61] "kube-scheduler-old-k8s-version-372099" [472284a4-9340-4bbc-8a1f-b9b55f4b0c3c] Running
	I1212 21:10:22.284952   60948 system_pods.go:61] "storage-provisioner" [b9fcec5f-bd1f-4c47-95cd-a9c8e3011e50] Running
	I1212 21:10:22.284961   60948 system_pods.go:74] duration metric: took 431.035724ms to wait for pod list to return data ...
	I1212 21:10:22.284990   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:22.592700   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:22.592734   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:22.592748   60948 node_conditions.go:105] duration metric: took 307.751463ms to run NodePressure ...
	I1212 21:10:22.592770   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:23.483331   60948 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:23.500661   60948 retry.go:31] will retry after 162.846257ms: kubelet not initialised
	I1212 21:10:23.669569   60948 retry.go:31] will retry after 257.344573ms: kubelet not initialised
	I1212 21:10:23.942373   60948 retry.go:31] will retry after 538.191385ms: kubelet not initialised
	I1212 21:10:24.487436   60948 retry.go:31] will retry after 635.824669ms: kubelet not initialised
	I1212 21:10:25.129226   60948 retry.go:31] will retry after 946.117517ms: kubelet not initialised
	I1212 21:10:26.082106   60948 retry.go:31] will retry after 2.374588936s: kubelet not initialised
	I1212 21:10:22.121093   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291519   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291585   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.297989   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:22.309847   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:22.321817   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326715   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326766   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.333001   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:22.345044   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:22.357827   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362795   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362858   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.368864   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
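The link names 3ec20f2e.0, b5213941.0 and 51391683.0 above are not arbitrary: each is the OpenSSL subject-name hash of the certificate plus a .0 suffix, which is the name the system trust store looks up. A sketch using the minikubeCA certificate from this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0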
	I1212 21:10:22.380605   61298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:22.385986   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:22.392931   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:22.399683   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:22.407203   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:22.414730   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:22.421808   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
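The -checkend 86400 probes above ask OpenSSL whether each certificate is still valid 24 hours (86400 seconds) from now; a non-zero exit would flag the cert as expiring within a day, so the restart path knows whether it can safely be reused. By hand, against one of the same files (a sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid tomorrow" \
      || echo "expires within 24h"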
	I1212 21:10:22.430050   61298 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:22.430205   61298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:22.430263   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:22.482907   61298 cri.go:89] found id: ""
	I1212 21:10:22.482981   61298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:22.495001   61298 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:22.495032   61298 kubeadm.go:636] restartCluster start
	I1212 21:10:22.495104   61298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:22.506418   61298 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.508078   61298 kubeconfig.go:92] found "default-k8s-diff-port-171828" server: "https://192.168.72.253:8444"
	I1212 21:10:22.511809   61298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:22.523641   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.523703   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.536887   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.536913   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.536965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.549418   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.050111   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.050218   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.063845   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.550201   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.550303   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.567468   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.050021   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.050193   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.064792   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.550119   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.550213   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.568169   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.049891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.049997   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.063341   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.549592   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.549682   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.564096   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.049596   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.049701   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.063482   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.549680   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.549793   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.563956   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.049482   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.049614   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.062881   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.440487   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:25.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:23.969715   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:23.970242   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:23.970272   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:23.970200   62255 retry.go:31] will retry after 1.769886418s: waiting for machine to come up
	I1212 21:10:25.741628   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:25.742060   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:25.742098   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:25.742014   62255 retry.go:31] will retry after 2.283589137s: waiting for machine to come up
	I1212 21:10:28.462838   60948 retry.go:31] will retry after 1.809333362s: kubelet not initialised
	I1212 21:10:30.278747   60948 retry.go:31] will retry after 4.059791455s: kubelet not initialised
	I1212 21:10:27.550084   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.550176   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.564365   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.049688   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.049771   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.065367   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.549922   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.550009   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.566964   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.049535   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.049643   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.062264   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.549891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.549970   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.563687   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.050397   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.050492   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.065602   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.550210   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.550298   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.562793   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.050281   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.050374   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.064836   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.550407   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.550527   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.563474   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:32.049593   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:32.049689   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:32.062459   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.935166   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:30.429274   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:28.028345   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:28.028796   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:28.028824   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:28.028757   62255 retry.go:31] will retry after 4.021160394s: waiting for machine to come up
	I1212 21:10:32.052992   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:32.053479   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:32.053506   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:32.053442   62255 retry.go:31] will retry after 4.864494505s: waiting for machine to come up
	I1212 21:10:34.344571   60948 retry.go:31] will retry after 9.338953291s: kubelet not initialised
	I1212 21:10:32.524460   61298 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:32.524492   61298 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:32.524523   61298 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:32.524586   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:32.565596   61298 cri.go:89] found id: ""
	I1212 21:10:32.565685   61298 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:32.582458   61298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:32.592539   61298 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:32.592615   61298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603658   61298 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603683   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:32.730418   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.535390   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.742601   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.839081   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.909128   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:33.909209   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:33.928197   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.452146   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.952473   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.452270   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.952431   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.451626   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.482100   61298 api_server.go:72] duration metric: took 2.572973799s to wait for apiserver process to appear ...
	I1212 21:10:36.482125   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:36.482154   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.482833   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.482869   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.483345   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.984105   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:32.433032   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:34.928686   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.930503   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.920697   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921201   60628 main.go:141] libmachine: (no-preload-343495) Found IP for machine: 192.168.61.176
	I1212 21:10:36.921235   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has current primary IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921248   60628 main.go:141] libmachine: (no-preload-343495) Reserving static IP address...
	I1212 21:10:36.921719   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.921757   60628 main.go:141] libmachine: (no-preload-343495) DBG | skip adding static IP to network mk-no-preload-343495 - found existing host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"}
	I1212 21:10:36.921770   60628 main.go:141] libmachine: (no-preload-343495) Reserved static IP address: 192.168.61.176
	I1212 21:10:36.921785   60628 main.go:141] libmachine: (no-preload-343495) Waiting for SSH to be available...
	I1212 21:10:36.921802   60628 main.go:141] libmachine: (no-preload-343495) DBG | Getting to WaitForSSH function...
	I1212 21:10:36.924581   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.924908   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.924941   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.925154   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH client type: external
	I1212 21:10:36.925191   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa (-rw-------)
	I1212 21:10:36.925223   60628 main.go:141] libmachine: (no-preload-343495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:36.925234   60628 main.go:141] libmachine: (no-preload-343495) DBG | About to run SSH command:
	I1212 21:10:36.925246   60628 main.go:141] libmachine: (no-preload-343495) DBG | exit 0
	I1212 21:10:37.059619   60628 main.go:141] libmachine: (no-preload-343495) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:37.060017   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetConfigRaw
	I1212 21:10:37.060752   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.063599   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064325   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.064365   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064468   60628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/config.json ...
	I1212 21:10:37.064705   60628 machine.go:88] provisioning docker machine ...
	I1212 21:10:37.064733   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:37.064938   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065115   60628 buildroot.go:166] provisioning hostname "no-preload-343495"
	I1212 21:10:37.065144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065286   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.068118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068517   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.068548   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.068980   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069141   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069312   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.069507   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.069958   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.069985   60628 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-343495 && echo "no-preload-343495" | sudo tee /etc/hostname
	I1212 21:10:37.212905   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-343495
	
	I1212 21:10:37.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.215789   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216147   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.216182   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.216525   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216704   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216877   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.217037   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.217425   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.217444   60628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-343495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-343495/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-343495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:37.355687   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:37.355721   60628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:37.355754   60628 buildroot.go:174] setting up certificates
	I1212 21:10:37.355767   60628 provision.go:83] configureAuth start
	I1212 21:10:37.355780   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.356089   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.359197   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359644   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.359717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359937   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.362695   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363043   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.363079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363251   60628 provision.go:138] copyHostCerts
	I1212 21:10:37.363316   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:37.363336   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:37.363410   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:37.363536   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:37.363549   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:37.363585   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:37.363671   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:37.363677   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:37.363703   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:37.363757   60628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.no-preload-343495 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-343495]
	I1212 21:10:37.526121   60628 provision.go:172] copyRemoteCerts
	I1212 21:10:37.526205   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:37.526234   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.529079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529425   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.529492   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529659   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.529850   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.530009   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.530153   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:37.632384   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:37.661242   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:10:37.689215   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:10:37.714781   60628 provision.go:86] duration metric: configureAuth took 358.999712ms
	I1212 21:10:37.714819   60628 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:37.715040   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:10:37.715144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.718379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.718815   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.718844   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.719212   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.719422   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719625   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719789   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.719975   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.720484   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.720519   60628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:38.062630   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:38.062660   60628 machine.go:91] provisioned docker machine in 997.934774ms
	I1212 21:10:38.062673   60628 start.go:300] post-start starting for "no-preload-343495" (driver="kvm2")
	I1212 21:10:38.062687   60628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:38.062707   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.062999   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:38.063033   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.065898   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066299   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.066331   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066626   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.066878   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.067063   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.067228   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.164612   60628 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:38.170132   60628 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:38.170162   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:38.170244   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:38.170351   60628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:38.170467   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:38.181959   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:38.208734   60628 start.go:303] post-start completed in 146.045424ms
	I1212 21:10:38.208762   60628 fix.go:56] fixHost completed within 24.051421131s
	I1212 21:10:38.208782   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.212118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212519   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.212551   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212732   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213124   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213268   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.213436   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:38.213801   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:38.213827   60628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:38.337185   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415438.279018484
	
	I1212 21:10:38.337225   60628 fix.go:206] guest clock: 1702415438.279018484
	I1212 21:10:38.337239   60628 fix.go:219] Guest: 2023-12-12 21:10:38.279018484 +0000 UTC Remote: 2023-12-12 21:10:38.208766005 +0000 UTC m=+370.324656490 (delta=70.252479ms)
	I1212 21:10:38.337264   60628 fix.go:190] guest clock delta is within tolerance: 70.252479ms
	I1212 21:10:38.337275   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 24.179969571s
	I1212 21:10:38.337305   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.337527   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:38.340658   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341019   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.341053   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341233   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.341952   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342179   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342291   60628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:38.342336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.342388   60628 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:38.342413   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.345379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345419   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345762   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345809   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345841   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345864   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.346049   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346055   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346433   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346438   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346597   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.346596   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.467200   60628 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:38.475578   60628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:38.627838   60628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:38.634520   60628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:38.634614   60628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:38.654823   60628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:38.654847   60628 start.go:475] detecting cgroup driver to use...
	I1212 21:10:38.654928   60628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:38.673550   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:38.691252   60628 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:38.691318   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:38.707542   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:38.724686   60628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:38.843033   60628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:38.973535   60628 docker.go:219] disabling docker service ...
	I1212 21:10:38.973610   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:38.987940   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:39.001346   60628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:39.105401   60628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:39.209198   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:39.222268   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:39.243154   60628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:39.243226   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.253418   60628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:39.253497   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.263273   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.274546   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.284359   60628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:39.294828   60628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:39.304818   60628 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:39.304894   60628 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:39.318541   60628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:39.328819   60628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:39.439285   60628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:39.619385   60628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:39.619462   60628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:39.625279   60628 start.go:543] Will wait 60s for crictl version
	I1212 21:10:39.625358   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:39.630234   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:39.680505   60628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:39.680579   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.736272   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.796111   60628 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 21:10:39.732208   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.732243   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.732258   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.761735   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.761771   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.984129   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.990620   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:39.990650   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.484444   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.492006   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:40.492039   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.983459   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.990813   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:10:41.001024   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:10:41.001055   61298 api_server.go:131] duration metric: took 4.518922579s to wait for apiserver health ...
	I1212 21:10:41.001070   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:41.001078   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:41.003043   61298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:41.004669   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:41.084775   61298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:41.173688   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:41.201100   61298 system_pods.go:59] 9 kube-system pods found
	I1212 21:10:41.201132   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201140   61298 system_pods.go:61] "coredns-5dd5756b68-hc52p" [f8895d1e-3484-4ffe-9d11-f5e4b7617c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201148   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:10:41.201158   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:10:41.201165   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:10:41.201171   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:10:41.201177   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:10:41.201182   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:10:41.201187   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:10:41.201193   61298 system_pods.go:74] duration metric: took 27.476871ms to wait for pod list to return data ...
	I1212 21:10:41.201203   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:41.205597   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:41.205624   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:41.205638   61298 node_conditions.go:105] duration metric: took 4.431218ms to run NodePressure ...
	I1212 21:10:41.205653   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:41.516976   61298 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529555   61298 kubeadm.go:787] kubelet initialised
	I1212 21:10:41.529592   61298 kubeadm.go:788] duration metric: took 12.533051ms waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529601   61298 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:41.538991   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.546618   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546645   61298 pod_ready.go:81] duration metric: took 7.620954ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.546658   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546667   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.556921   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556951   61298 pod_ready.go:81] duration metric: took 10.273719ms waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.556963   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556972   61298 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.563538   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563570   61298 pod_ready.go:81] duration metric: took 6.584443ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.563586   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563598   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.578973   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579009   61298 pod_ready.go:81] duration metric: took 15.402148ms waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.579025   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579046   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.978938   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978972   61298 pod_ready.go:81] duration metric: took 399.914995ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.978990   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978999   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:38.930743   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:41.429587   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:39.798106   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:39.800962   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801364   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:39.801399   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801592   60628 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:39.806328   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:39.821949   60628 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:10:39.822014   60628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:39.873704   60628 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:10:39.873733   60628 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:10:39.873820   60628 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.873840   60628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.873859   60628 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:39.874021   60628 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.874062   60628 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 21:10:39.874043   60628 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.873836   60628 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.874359   60628 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.875369   60628 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875379   60628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.875390   60628 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 21:10:39.875428   60628 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.875284   60628 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.875803   60628 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.060906   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.061267   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.063065   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.074673   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 21:10:40.076082   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.080787   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.108962   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.169237   60628 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 21:10:40.169289   60628 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.169363   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.172419   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.251588   60628 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 21:10:40.251638   60628 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.251684   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.264051   60628 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 21:10:40.264146   60628 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.264227   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397546   60628 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 21:10:40.397590   60628 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.397640   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397669   60628 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 21:10:40.397709   60628 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.397774   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397876   60628 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 21:10:40.397978   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.398033   60628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:10:40.398064   60628 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.398079   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.398105   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397976   60628 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.398142   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.398143   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.418430   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.418500   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.530581   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530693   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.530781   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530584   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 21:10:40.530918   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:40.544770   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.544970   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 21:10:40.545108   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:40.567016   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567130   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567196   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.567297   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.604461   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 21:10:40.604484   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.604531   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 21:10:40.604488   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604644   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604590   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.612665   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 21:10:40.612741   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612794   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612800   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 21:10:40.612935   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:40.615786   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 21:10:42.378453   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378486   61298 pod_ready.go:81] duration metric: took 399.478547ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.378499   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378508   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:42.778834   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778871   61298 pod_ready.go:81] duration metric: took 400.345358ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.778887   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778897   61298 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:43.179851   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179879   61298 pod_ready.go:81] duration metric: took 400.97377ms waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:43.179891   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179898   61298 pod_ready.go:38] duration metric: took 1.6502873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:43.179913   61298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:10:43.196087   61298 ops.go:34] apiserver oom_adj: -16
	I1212 21:10:43.196114   61298 kubeadm.go:640] restartCluster took 20.701074763s
	I1212 21:10:43.196126   61298 kubeadm.go:406] StartCluster complete in 20.766085453s
	I1212 21:10:43.196146   61298 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.196225   61298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:10:43.198844   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.199122   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:10:43.199268   61298 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:10:43.199342   61298 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199363   61298 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199372   61298 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:10:43.199396   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:43.199456   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199373   61298 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199492   61298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-171828"
	I1212 21:10:43.199389   61298 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199551   61298 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199568   61298 addons.go:240] addon metrics-server should already be in state true
	I1212 21:10:43.199637   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199891   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199915   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.199922   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199945   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.200148   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.200177   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.218067   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I1212 21:10:43.218679   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1212 21:10:43.218817   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219111   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219234   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1212 21:10:43.219356   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219372   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219590   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219607   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219699   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219807   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220061   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220258   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.220278   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.220324   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.220436   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.220488   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.220676   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.221418   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.221444   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.224718   61298 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.224742   61298 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:10:43.224769   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.225189   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.225227   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.225431   61298 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-171828" context rescaled to 1 replicas
	I1212 21:10:43.225467   61298 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:10:43.228523   61298 out.go:177] * Verifying Kubernetes components...
	I1212 21:10:43.230002   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:10:43.239165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1212 21:10:43.239749   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.240357   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.240383   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.240761   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.240937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.241446   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1212 21:10:43.241951   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.242522   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.242541   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.242864   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.242931   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.244753   61298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:43.243219   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.246309   61298 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.246332   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:10:43.246358   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.248809   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.250840   61298 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:10:43.252430   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:10:43.251041   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.250309   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.247068   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I1212 21:10:43.252596   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:10:43.252622   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.252718   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.252745   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.253368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.253677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.253846   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.254434   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.259686   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.259697   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.259748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259844   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.259883   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.259973   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.260149   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.260361   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.260420   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.261546   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.261594   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.284357   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I1212 21:10:43.284945   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.285431   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.285444   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.286009   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.286222   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.288257   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.288542   61298 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.288565   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:10:43.288586   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.291842   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.292527   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.292680   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.293076   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.293350   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.293512   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.293683   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.405154   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.426115   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:10:43.426141   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:10:43.486953   61298 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:10:43.486975   61298 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:43.491689   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:10:43.491709   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:10:43.505611   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.538745   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:43.538785   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:10:43.600598   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:44.933368   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.528176624s)
	I1212 21:10:44.933442   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.427784857s)
	I1212 21:10:44.933493   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933511   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933539   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332913009s)
	I1212 21:10:44.933496   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933559   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933566   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933569   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933926   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.933943   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933944   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.933955   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933964   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933974   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934081   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934096   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934118   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934120   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934127   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934132   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934138   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934156   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934372   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934397   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934401   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934808   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934845   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934858   61298 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-171828"
	I1212 21:10:44.937727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.937783   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.937806   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.945948   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.945966   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.946202   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.946220   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.949385   61298 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1212 21:10:43.688668   60948 retry.go:31] will retry after 13.919612963s: kubelet not initialised
	I1212 21:10:44.951009   61298 addons.go:502] enable addons completed in 1.751742212s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1212 21:10:45.583280   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.432062   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:45.929995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.305027541s)
	I1212 21:10:43.909740   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.296738263s)
	I1212 21:10:43.909764   60628 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:43.909770   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 21:10:43.909810   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:45.879475   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969630074s)
	I1212 21:10:45.879502   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 21:10:45.879527   60628 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:45.879592   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:47.584004   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:50.113807   61298 node_ready.go:49] node "default-k8s-diff-port-171828" has status "Ready":"True"
	I1212 21:10:50.113837   61298 node_ready.go:38] duration metric: took 6.626786171s waiting for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:50.113850   61298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:50.128903   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656130   61298 pod_ready.go:92] pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:50.656153   61298 pod_ready.go:81] duration metric: took 527.212389ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656161   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:47.931716   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.433176   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.267864   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.388242252s)
	I1212 21:10:50.267898   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 21:10:50.267931   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:50.267977   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:52.845895   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.577890173s)
	I1212 21:10:52.845935   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 21:10:52.845969   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.846023   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.677971   61298 pod_ready.go:102] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:53.179154   61298 pod_ready.go:92] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.179186   61298 pod_ready.go:81] duration metric: took 2.523018353s waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.179200   61298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185649   61298 pod_ready.go:92] pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.185673   61298 pod_ready.go:81] duration metric: took 6.463925ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185685   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193280   61298 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.193303   61298 pod_ready.go:81] duration metric: took 1.00761061s waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193313   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484196   61298 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.484223   61298 pod_ready.go:81] duration metric: took 290.902142ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484240   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883746   61298 pod_ready.go:92] pod "kube-proxy-47qmb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.883773   61298 pod_ready.go:81] duration metric: took 399.524854ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883784   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283637   61298 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:55.283670   61298 pod_ready.go:81] duration metric: took 399.871874ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283684   61298 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:52.931372   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.932174   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.204367   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.358317317s)
	I1212 21:10:54.204393   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 21:10:54.204425   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:54.204485   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:56.066774   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.862261726s)
	I1212 21:10:56.066802   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 21:10:56.066825   60628 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:56.066874   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:57.118959   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.052055479s)
	I1212 21:10:57.118985   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 21:10:57.119009   60628 cache_images.go:123] Successfully loaded all cached images
	I1212 21:10:57.119021   60628 cache_images.go:92] LoadImages completed in 17.245274715s
	I1212 21:10:57.119103   60628 ssh_runner.go:195] Run: crio config
	I1212 21:10:57.180068   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:10:57.180093   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:57.180109   60628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:57.180127   60628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-343495 NodeName:no-preload-343495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:57.180250   60628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-343495"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:57.180330   60628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-343495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:10:57.180382   60628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:10:57.191949   60628 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:57.192034   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:57.202921   60628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 21:10:57.219512   60628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:10:57.235287   60628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1212 21:10:57.252278   60628 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:57.256511   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:57.268744   60628 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495 for IP: 192.168.61.176
	I1212 21:10:57.268781   60628 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:57.268959   60628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:57.269032   60628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:57.269133   60628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/client.key
	I1212 21:10:57.269228   60628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key.492ad1cf
	I1212 21:10:57.269316   60628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key
	I1212 21:10:57.269466   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:57.269511   60628 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:57.269526   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:57.269562   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:57.269597   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:57.269629   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:57.269685   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:57.270311   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:57.295960   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:10:57.320157   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:57.344434   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:57.368906   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:57.391830   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:57.415954   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:57.441182   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:57.465055   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:57.489788   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:57.513828   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:57.536138   60628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:57.553168   60628 ssh_runner.go:195] Run: openssl version
	I1212 21:10:57.558771   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:57.570141   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574935   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574990   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.580985   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:57.592528   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:57.603477   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608448   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608511   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.614316   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:57.625667   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:57.637284   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642258   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642323   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.648072   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:57.659762   60628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:57.664517   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:57.670385   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:57.676336   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:57.682074   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:57.688387   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:57.694542   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:10:57.700400   60628 kubeadm.go:404] StartCluster: {Name:no-preload-343495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:57.700520   60628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:57.700576   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:57.738703   60628 cri.go:89] found id: ""
	I1212 21:10:57.738776   60628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:57.749512   60628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:57.749538   60628 kubeadm.go:636] restartCluster start
	I1212 21:10:57.749610   60628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:57.758905   60628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.760000   60628 kubeconfig.go:92] found "no-preload-343495" server: "https://192.168.61.176:8443"
	I1212 21:10:57.762219   60628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:57.773107   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.773181   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.785478   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.785500   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.785554   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.797412   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.613799   60948 retry.go:31] will retry after 13.009137494s: kubelet not initialised
	I1212 21:10:57.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.591232   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:02.093666   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:57.429861   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.429944   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:01.438267   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:58.297630   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.297712   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.312155   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:58.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.797652   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.809726   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.297677   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.309875   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.798441   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.798531   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.810533   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.298154   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.298237   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.310050   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.797683   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.809712   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.298094   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.298224   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.310181   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.797635   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.797742   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.809336   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.297912   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.297997   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.309215   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.797666   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.797749   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.808815   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.590426   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.590850   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.929977   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.429697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.297975   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.298066   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.308865   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:03.798103   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.798207   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.809553   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.297580   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.297653   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.309100   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.797646   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.797767   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.809269   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.297665   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.309281   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.797809   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.797898   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.809794   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.298381   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.298497   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.309467   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.798050   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.798132   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.809758   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.298354   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:07.298434   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:07.309655   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.773157   60628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:11:07.773216   60628 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:11:07.773229   60628 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:11:07.773290   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:11:07.815986   60628 cri.go:89] found id: ""
	I1212 21:11:07.816068   60628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:11:07.832950   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:11:07.842287   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:11:07.842353   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851694   60628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851720   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:10.630075   60948 kubeadm.go:787] kubelet initialised
	I1212 21:11:10.630105   60948 kubeadm.go:788] duration metric: took 47.146743334s waiting for restarted kubelet to initialise ...
	I1212 21:11:10.630116   60948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:10.637891   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644674   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.644700   60948 pod_ready.go:81] duration metric: took 6.771094ms waiting for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644710   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651801   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.651830   60948 pod_ready.go:81] duration metric: took 7.112566ms waiting for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651845   60948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659678   60948 pod_ready.go:92] pod "etcd-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.659700   60948 pod_ready.go:81] duration metric: took 7.845111ms waiting for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659711   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665929   60948 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.665958   60948 pod_ready.go:81] duration metric: took 6.237833ms waiting for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665972   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028938   60948 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.028961   60948 pod_ready.go:81] duration metric: took 362.981718ms waiting for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028973   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428824   60948 pod_ready.go:92] pod "kube-proxy-5mvzb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.428853   60948 pod_ready.go:81] duration metric: took 399.87314ms waiting for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428866   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828546   60948 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.828578   60948 pod_ready.go:81] duration metric: took 399.696769ms waiting for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828590   60948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:09.094309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:11.098257   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:08.928635   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:10.929896   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:07.988857   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.772924   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.980401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.108938   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.189716   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:11:09.189780   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.201432   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.722085   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.222325   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.721931   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.222186   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.721642   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.745977   60628 api_server.go:72] duration metric: took 2.556260463s to wait for apiserver process to appear ...
	I1212 21:11:11.746005   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:11:11.746025   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:14.135897   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.138482   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:13.590920   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.591230   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:12.931314   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.429327   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.294367   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.294401   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.294413   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.347744   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.347780   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.848435   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.853773   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:16.853823   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.348312   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.359543   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.359579   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.848425   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.853966   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.854006   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:18.348644   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:18.373028   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:11:18.385301   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:11:18.385341   60628 api_server.go:131] duration metric: took 6.639327054s to wait for apiserver health ...
	I1212 21:11:18.385353   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:11:18.385362   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:11:18.387289   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:11:18.636422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:20.636472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.592197   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.593157   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:21.594049   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.434254   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.930697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:18.388998   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:11:18.449634   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:11:18.491001   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:11:18.517694   60628 system_pods.go:59] 8 kube-system pods found
	I1212 21:11:18.517729   60628 system_pods.go:61] "coredns-76f75df574-s9jgn" [b13d32b4-a44b-4f79-bece-d0adafef4c7c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:11:18.517740   60628 system_pods.go:61] "etcd-no-preload-343495" [ad48db04-9c79-48e9-a001-1a9061c43cb9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:11:18.517754   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [24d024c1-a89f-4ede-8dbf-7502f0179cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:11:18.517760   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [10ce49e3-2679-4ac5-89aa-9179582ae778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:11:18.517765   60628 system_pods.go:61] "kube-proxy-492l6" [3a2bbe46-0506-490f-aae8-a97e48f3205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:11:18.517773   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [bca80470-c204-4a34-9c7d-5de3ad382c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:11:18.517778   60628 system_pods.go:61] "metrics-server-57f55c9bc5-tmmk4" [11066021-353e-418e-9c7f-78e72dae44a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:11:18.517785   60628 system_pods.go:61] "storage-provisioner" [e681d4cd-f2f6-4cf3-ba09-0f361a64aafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:11:18.517794   60628 system_pods.go:74] duration metric: took 26.756848ms to wait for pod list to return data ...
	I1212 21:11:18.517815   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:11:18.521330   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:11:18.521362   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:11:18.521377   60628 node_conditions.go:105] duration metric: took 3.557177ms to run NodePressure ...
	I1212 21:11:18.521401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:18.945267   60628 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958848   60628 kubeadm.go:787] kubelet initialised
	I1212 21:11:18.958877   60628 kubeadm.go:788] duration metric: took 13.578451ms waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958886   60628 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:18.964819   60628 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:20.987111   60628 pod_ready.go:102] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.494268   60628 pod_ready.go:92] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:22.494299   60628 pod_ready.go:81] duration metric: took 3.529452237s waiting for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:22.494311   60628 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:23.136140   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:25.635800   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.093215   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.590861   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.429921   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.928565   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.929668   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.514490   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.013783   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.637165   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:30.133948   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.091057   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.598428   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:28.930654   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.428436   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.514918   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.514945   60628 pod_ready.go:81] duration metric: took 7.020626508s waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.514955   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524669   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.524696   60628 pod_ready.go:81] duration metric: took 9.734059ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524709   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541808   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.541830   60628 pod_ready.go:81] duration metric: took 17.113672ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541839   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553955   60628 pod_ready.go:92] pod "kube-proxy-492l6" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.553979   60628 pod_ready.go:81] duration metric: took 12.134143ms waiting for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553988   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562798   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.562835   60628 pod_ready.go:81] duration metric: took 8.836628ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562850   60628 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:31.818614   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:32.134558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.135376   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.634429   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.090158   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.091290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.429336   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:35.430448   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.819222   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.318847   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.637527   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:41.134980   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.115262   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.591502   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:37.929700   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:39.929830   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.318911   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.319619   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.319750   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.135558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.635174   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.090309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.590529   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.434126   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.931810   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.818997   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.321699   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.635472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.636294   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.640471   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.590577   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.590885   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.591122   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.429836   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.431518   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.928631   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.823419   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:52.319752   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.137390   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.634152   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.593196   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.089777   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.929750   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:55.932860   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.321554   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.819877   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.136605   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.092816   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.591682   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.429543   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.432255   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:59.318053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.325068   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.137023   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.635397   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.091397   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.094195   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:02.933370   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.430020   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.819751   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:06.319806   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.137648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.635154   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.591471   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.091503   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.430684   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:09.929393   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.319984   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.821053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.637206   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.136850   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.590992   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.591391   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.591744   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.429299   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.429724   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.430114   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:13.329939   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.820117   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.820519   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.199675   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.635179   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.635426   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.091628   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.091739   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:18.929340   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.929933   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.319134   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.819399   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:24.133408   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:26.134293   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:23.093543   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.591828   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.930710   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.434148   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.319949   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.337078   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.134422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.137461   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.090730   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.092555   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.928685   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.929200   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.929272   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.819461   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.819541   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.633893   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.636198   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.636373   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.590019   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.590953   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.591420   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.929488   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:35.929671   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.819661   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.322177   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.137315   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.635168   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.097607   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.590836   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:37.930820   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.930916   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:38.324332   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:40.819395   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.819784   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.640489   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.134648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.590910   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.592083   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.429717   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:44.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.431053   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.320122   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:47.819547   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.135328   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.137213   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.091979   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.093149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.929529   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:51.428177   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.319560   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.820242   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.635136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.637000   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.591430   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.090634   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:53.429307   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.429455   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.821647   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.319971   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.135608   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.137606   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.634197   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.590565   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:00.091074   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.429785   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.928834   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.818255   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.819526   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:03.635008   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.134591   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.591023   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.592260   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.092331   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.430411   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.930385   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.326885   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.822828   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:08.135379   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:10.136957   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.590114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.593478   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.434219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.929736   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.930477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.322955   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.819793   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:12.137554   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.635349   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.637857   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.092558   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.591772   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.429362   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.931219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.319867   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.325224   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.135196   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.634789   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.090842   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.591235   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.929464   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:18.326463   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:20.819839   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:22.820060   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.636879   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.135188   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.591676   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.591833   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.929811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.429286   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.319356   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.819668   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.634130   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.635441   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.591961   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.090560   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:32.091429   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.929344   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.929561   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:29.820548   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:31.820901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.134798   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.635317   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.094290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.589895   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.429811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.429995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.319447   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.822690   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.636833   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.136281   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:38.591586   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.090302   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.929337   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.428532   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:39.321656   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.820917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.635037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.135037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:43.091587   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.590322   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.429616   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.430483   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.431960   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.319403   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.326448   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.136136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:49.635013   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.635308   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.592114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:50.089825   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:52.090721   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.928619   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.429031   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.820121   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.319794   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.134872   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:54.589746   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.590432   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.429817   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:55.929211   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.820666   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.322986   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.135622   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.139553   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.592602   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:01.091154   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:57.929777   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:59.930300   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.818901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.819587   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.634488   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.636059   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:03.591886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:06.091886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.432472   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.125384   60833 pod_ready.go:81] duration metric: took 4m0.000960425s waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:05.125428   60833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:05.125437   60833 pod_ready.go:38] duration metric: took 4m2.799403108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:05.125453   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:05.125518   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:05.125592   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:05.203017   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:05.203045   60833 cri.go:89] found id: ""
	I1212 21:14:05.203054   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:05.203115   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.208622   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:05.208693   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:05.250079   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.250102   60833 cri.go:89] found id: ""
	I1212 21:14:05.250118   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:05.250161   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.254870   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:05.254946   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:05.323718   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:05.323748   60833 cri.go:89] found id: ""
	I1212 21:14:05.323757   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:05.323819   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.328832   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:05.328902   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:05.372224   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.372252   60833 cri.go:89] found id: ""
	I1212 21:14:05.372262   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:05.372316   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.377943   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:05.378007   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:05.417867   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:05.417894   60833 cri.go:89] found id: ""
	I1212 21:14:05.417905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:05.417961   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.422198   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:05.422264   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:05.462031   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:05.462052   60833 cri.go:89] found id: ""
	I1212 21:14:05.462059   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:05.462114   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.466907   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:05.466962   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:05.512557   60833 cri.go:89] found id: ""
	I1212 21:14:05.512585   60833 logs.go:284] 0 containers: []
	W1212 21:14:05.512592   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:05.512597   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:05.512663   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:05.553889   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.553914   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:05.553921   60833 cri.go:89] found id: ""
	I1212 21:14:05.553929   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:05.553982   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.558864   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.563550   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:05.563572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:05.627093   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:05.627135   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:05.642800   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:05.642827   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:05.820642   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:05.820683   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.871256   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:05.871299   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.913399   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:05.913431   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.955061   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:05.955103   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:06.012639   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:06.012681   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:06.057933   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:06.057970   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:06.110367   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:06.110400   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:06.173711   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:06.173746   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:06.214291   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:06.214328   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:06.260105   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:06.260142   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:03.320010   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.321011   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.821313   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.134137   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.635405   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:08.591048   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:10.593286   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.219373   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:09.237985   60833 api_server.go:72] duration metric: took 4m14.403294004s to wait for apiserver process to appear ...
	I1212 21:14:09.238014   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:09.238057   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:09.238119   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:09.281005   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:09.281028   60833 cri.go:89] found id: ""
	I1212 21:14:09.281037   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:09.281097   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.285354   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:09.285436   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:09.336833   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.336864   60833 cri.go:89] found id: ""
	I1212 21:14:09.336874   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:09.336937   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.342850   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:09.342928   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:09.387107   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.387133   60833 cri.go:89] found id: ""
	I1212 21:14:09.387143   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:09.387202   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.392729   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:09.392806   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:09.433197   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:09.433225   60833 cri.go:89] found id: ""
	I1212 21:14:09.433232   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:09.433281   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.438043   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:09.438092   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:09.486158   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.486185   60833 cri.go:89] found id: ""
	I1212 21:14:09.486200   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:09.486255   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.491667   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:09.491735   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:09.536085   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:09.536108   60833 cri.go:89] found id: ""
	I1212 21:14:09.536114   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:09.536165   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.540939   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:09.541008   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:09.585160   60833 cri.go:89] found id: ""
	I1212 21:14:09.585187   60833 logs.go:284] 0 containers: []
	W1212 21:14:09.585195   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:09.585200   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:09.585254   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:09.628972   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:09.629001   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:09.629008   60833 cri.go:89] found id: ""
	I1212 21:14:09.629017   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:09.629075   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.634242   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.639308   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:09.639344   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:09.766299   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:09.766329   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.816655   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:09.816699   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.863184   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:09.863212   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.924345   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:09.924382   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:10.363852   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:10.363897   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:10.417375   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:10.417407   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:10.432758   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:10.432788   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:10.483732   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:10.483778   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:10.538254   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:10.538283   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:10.598142   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:10.598174   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:10.650678   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:10.650710   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:10.697971   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:10.698000   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:10.318636   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.321917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.134600   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:14.134822   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.634845   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.091008   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:15.589901   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.241720   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:14:13.248465   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:14:13.249814   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:14:13.249839   60833 api_server.go:131] duration metric: took 4.011816395s to wait for apiserver health ...
	I1212 21:14:13.249848   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:14:13.249871   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:13.249916   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:13.300138   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:13.300161   60833 cri.go:89] found id: ""
	I1212 21:14:13.300171   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:13.300228   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.306350   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:13.306424   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:13.358644   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.358667   60833 cri.go:89] found id: ""
	I1212 21:14:13.358676   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:13.358737   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.363921   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:13.363989   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:13.413339   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:13.413366   60833 cri.go:89] found id: ""
	I1212 21:14:13.413374   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:13.413420   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.418188   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:13.418248   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:13.461495   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:13.461522   60833 cri.go:89] found id: ""
	I1212 21:14:13.461532   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:13.461581   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.465878   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:13.465951   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:13.511866   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.511895   60833 cri.go:89] found id: ""
	I1212 21:14:13.511905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:13.511960   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.516312   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:13.516381   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:13.560993   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.561023   60833 cri.go:89] found id: ""
	I1212 21:14:13.561034   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:13.561092   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.565439   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:13.565514   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:13.608401   60833 cri.go:89] found id: ""
	I1212 21:14:13.608434   60833 logs.go:284] 0 containers: []
	W1212 21:14:13.608445   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:13.608452   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:13.608507   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:13.661929   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:13.661956   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:13.661963   60833 cri.go:89] found id: ""
	I1212 21:14:13.661972   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:13.662036   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.667039   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.671770   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:13.671791   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:13.793637   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:13.793671   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.844253   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:13.844286   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.886965   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:13.886997   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.946537   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:13.946572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:13.999732   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:13.999769   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:14.015819   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:14.015849   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:14.063649   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:14.063684   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:14.116465   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:14.116499   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:14.179838   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:14.179875   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:14.224213   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:14.224243   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:14.262832   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:14.262858   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:14.307981   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:14.308008   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:17.188864   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:14:17.188919   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.188927   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.188934   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.188943   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.188950   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.188959   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.188980   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.188988   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.188996   60833 system_pods.go:74] duration metric: took 3.939142839s to wait for pod list to return data ...
	I1212 21:14:17.189005   60833 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:14:17.192352   60833 default_sa.go:45] found service account: "default"
	I1212 21:14:17.192390   60833 default_sa.go:55] duration metric: took 3.37914ms for default service account to be created ...
	I1212 21:14:17.192400   60833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:14:17.198396   60833 system_pods.go:86] 8 kube-system pods found
	I1212 21:14:17.198424   60833 system_pods.go:89] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.198429   60833 system_pods.go:89] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.198433   60833 system_pods.go:89] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.198438   60833 system_pods.go:89] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.198442   60833 system_pods.go:89] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.198446   60833 system_pods.go:89] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.198455   60833 system_pods.go:89] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.198459   60833 system_pods.go:89] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.198466   60833 system_pods.go:126] duration metric: took 6.060971ms to wait for k8s-apps to be running ...
	I1212 21:14:17.198473   60833 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:14:17.198513   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:14:17.217190   60833 system_svc.go:56] duration metric: took 18.71037ms WaitForService to wait for kubelet.
	I1212 21:14:17.217224   60833 kubeadm.go:581] duration metric: took 4m22.382539055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:14:17.217249   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:14:17.221504   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:14:17.221540   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:14:17.221555   60833 node_conditions.go:105] duration metric: took 4.300742ms to run NodePressure ...
	I1212 21:14:17.221569   60833 start.go:228] waiting for startup goroutines ...
	I1212 21:14:17.221577   60833 start.go:233] waiting for cluster config update ...
	I1212 21:14:17.221594   60833 start.go:242] writing updated cluster config ...
	I1212 21:14:17.221939   60833 ssh_runner.go:195] Run: rm -f paused
	I1212 21:14:17.277033   60833 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:14:17.279044   60833 out.go:177] * Done! kubectl is now configured to use "embed-certs-831188" cluster and "default" namespace by default
	I1212 21:14:14.818262   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.823731   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:18.634990   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.135517   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:17.593149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:20.091419   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:22.091781   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:19.320462   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.819129   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.636400   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.134084   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:24.591552   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:27.090974   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.825879   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.318691   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.135741   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:30.635812   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:29.091882   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.590150   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.819815   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.319140   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.134738   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:35.637961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.591929   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.091976   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.819872   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.325409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.139066   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.635659   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:41.090674   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.819966   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.820281   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.135071   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.635762   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.091695   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.595126   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.323343   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.819822   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.134846   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.135229   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.092328   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.591470   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.319483   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.819702   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.135550   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:54.634163   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:56.634961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.593957   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.091338   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.284411   61298 pod_ready.go:81] duration metric: took 4m0.000712263s waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:55.284453   61298 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:55.284462   61298 pod_ready.go:38] duration metric: took 4m5.170596318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:55.284486   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:55.284536   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:55.284595   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:55.345012   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:55.345043   61298 cri.go:89] found id: ""
	I1212 21:14:55.345055   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:55.345118   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.350261   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:55.350329   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:55.403088   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:55.403116   61298 cri.go:89] found id: ""
	I1212 21:14:55.403124   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:55.403169   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.408043   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:55.408103   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:55.449581   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:55.449608   61298 cri.go:89] found id: ""
	I1212 21:14:55.449615   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:55.449670   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.454762   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:55.454828   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:55.502919   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:55.502960   61298 cri.go:89] found id: ""
	I1212 21:14:55.502970   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:55.503050   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.508035   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:55.508101   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:55.550335   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:55.550365   61298 cri.go:89] found id: ""
	I1212 21:14:55.550376   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:55.550437   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.554985   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:55.555043   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:55.599678   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:55.599707   61298 cri.go:89] found id: ""
	I1212 21:14:55.599716   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:55.599772   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.604830   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:55.604913   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:55.651733   61298 cri.go:89] found id: ""
	I1212 21:14:55.651767   61298 logs.go:284] 0 containers: []
	W1212 21:14:55.651774   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:55.651779   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:55.651825   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:55.690691   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:55.690716   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.690723   61298 cri.go:89] found id: ""
	I1212 21:14:55.690732   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:55.690778   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.695227   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.699700   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:14:55.699723   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:55.751176   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:14:55.751210   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.789388   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:55.789417   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:56.270250   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:56.270296   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:56.315517   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:56.315549   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:56.377591   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:14:56.377648   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:56.432089   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:56.432124   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:56.496004   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:56.496038   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:56.543979   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:56.544010   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:56.599613   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:14:56.599644   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:56.646113   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:14:56.646146   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:56.693081   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:56.693111   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:56.709557   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:56.709591   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:53.319742   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.320811   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:57.820478   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.134092   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:01.135233   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.366965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:59.385251   61298 api_server.go:72] duration metric: took 4m16.159743319s to wait for apiserver process to appear ...
	I1212 21:14:59.385280   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:59.385317   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:59.385365   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:59.433011   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:59.433038   61298 cri.go:89] found id: ""
	I1212 21:14:59.433047   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:59.433088   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.438059   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:59.438136   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:59.477000   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.477078   61298 cri.go:89] found id: ""
	I1212 21:14:59.477093   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:59.477153   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.481716   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:59.481791   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:59.526936   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.526966   61298 cri.go:89] found id: ""
	I1212 21:14:59.526975   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:59.527037   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.535907   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:59.535985   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:59.580818   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:59.580848   61298 cri.go:89] found id: ""
	I1212 21:14:59.580856   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:59.580916   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.585685   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:59.585733   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:59.640697   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:59.640721   61298 cri.go:89] found id: ""
	I1212 21:14:59.640731   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:59.640798   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.644940   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:59.645004   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:59.687873   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.687901   61298 cri.go:89] found id: ""
	I1212 21:14:59.687910   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:59.687960   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.692382   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:59.692442   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:59.735189   61298 cri.go:89] found id: ""
	I1212 21:14:59.735225   61298 logs.go:284] 0 containers: []
	W1212 21:14:59.735235   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:59.735256   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:59.735323   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:59.778668   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:59.778702   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:59.778708   61298 cri.go:89] found id: ""
	I1212 21:14:59.778717   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:59.778773   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.782827   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.787277   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:59.787303   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:59.802470   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:59.802499   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.864191   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:59.864225   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.911007   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:59.911032   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.975894   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:59.975932   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:00.021750   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:00.021780   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:00.061527   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:00.061557   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:00.484318   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:00.484359   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:00.549321   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:00.549357   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:00.600589   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:00.600629   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:00.643660   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:00.643690   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:00.698016   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:00.698047   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:00.741819   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:00.741850   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:00.319685   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:02.320017   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.136545   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:05.635632   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.383318   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:15:03.389750   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:15:03.391084   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:15:03.391117   61298 api_server.go:131] duration metric: took 4.005829911s to wait for apiserver health ...
	I1212 21:15:03.391155   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:03.391181   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:15:03.391262   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:15:03.438733   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:03.438754   61298 cri.go:89] found id: ""
	I1212 21:15:03.438762   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:15:03.438809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.443133   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:15:03.443203   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:15:03.488960   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.488990   61298 cri.go:89] found id: ""
	I1212 21:15:03.489001   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:15:03.489058   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.493741   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:15:03.493807   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:15:03.541286   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:03.541316   61298 cri.go:89] found id: ""
	I1212 21:15:03.541325   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:15:03.541387   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.545934   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:15:03.546008   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:15:03.585937   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.585962   61298 cri.go:89] found id: ""
	I1212 21:15:03.585971   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:15:03.586039   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.590444   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:15:03.590516   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:15:03.626793   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:03.626826   61298 cri.go:89] found id: ""
	I1212 21:15:03.626835   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:15:03.626894   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.631829   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:15:03.631906   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:15:03.676728   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:03.676750   61298 cri.go:89] found id: ""
	I1212 21:15:03.676758   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:15:03.676809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.681068   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:15:03.681123   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:15:03.723403   61298 cri.go:89] found id: ""
	I1212 21:15:03.723430   61298 logs.go:284] 0 containers: []
	W1212 21:15:03.723437   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:15:03.723442   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:15:03.723502   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:15:03.772837   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.772868   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.772875   61298 cri.go:89] found id: ""
	I1212 21:15:03.772884   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:15:03.772940   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.777274   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.782354   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:15:03.782379   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.823892   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:03.823919   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.866656   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:15:03.866689   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.920757   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:03.920798   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.963737   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:03.963766   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:04.005559   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:15:04.005582   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:04.054868   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:04.054901   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:04.118941   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:04.118973   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:04.188272   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:15:04.188314   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:04.230473   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:15:04.230502   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:15:04.247069   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:04.247097   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:04.425844   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:04.425877   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:04.492751   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:04.492789   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:07.374768   61298 system_pods.go:59] 8 kube-system pods found
	I1212 21:15:07.374796   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.374801   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.374806   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.374810   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.374814   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.374818   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.374823   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.374828   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.374835   61298 system_pods.go:74] duration metric: took 3.983674471s to wait for pod list to return data ...
	I1212 21:15:07.374842   61298 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:07.377370   61298 default_sa.go:45] found service account: "default"
	I1212 21:15:07.377391   61298 default_sa.go:55] duration metric: took 2.542814ms for default service account to be created ...
	I1212 21:15:07.377400   61298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:07.384723   61298 system_pods.go:86] 8 kube-system pods found
	I1212 21:15:07.384751   61298 system_pods.go:89] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.384758   61298 system_pods.go:89] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.384767   61298 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.384776   61298 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.384782   61298 system_pods.go:89] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.384789   61298 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.384800   61298 system_pods.go:89] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.384809   61298 system_pods.go:89] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.384824   61298 system_pods.go:126] duration metric: took 7.416446ms to wait for k8s-apps to be running ...
	I1212 21:15:07.384838   61298 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:15:07.384893   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:07.402316   61298 system_svc.go:56] duration metric: took 17.466449ms WaitForService to wait for kubelet.
	I1212 21:15:07.402350   61298 kubeadm.go:581] duration metric: took 4m24.176848962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:15:07.402367   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:15:07.405566   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:15:07.405598   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:15:07.405616   61298 node_conditions.go:105] duration metric: took 3.244651ms to run NodePressure ...
	I1212 21:15:07.405628   61298 start.go:228] waiting for startup goroutines ...
	I1212 21:15:07.405637   61298 start.go:233] waiting for cluster config update ...
	I1212 21:15:07.405649   61298 start.go:242] writing updated cluster config ...
	I1212 21:15:07.405956   61298 ssh_runner.go:195] Run: rm -f paused
	I1212 21:15:07.457339   61298 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:15:07.459492   61298 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-171828" cluster and "default" namespace by default
	I1212 21:15:04.820409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:07.323802   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:08.135943   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:10.633863   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:11.829177   60948 pod_ready.go:81] duration metric: took 4m0.000566874s waiting for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:11.829231   60948 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:11.829268   60948 pod_ready.go:38] duration metric: took 4m1.1991406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:11.829314   60948 kubeadm.go:640] restartCluster took 5m11.909727716s
	W1212 21:15:11.829387   60948 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:11.829425   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:09.824487   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:12.319761   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:14.818898   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:16.822843   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:18.398899   60948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.569443116s)
	I1212 21:15:18.398988   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:18.421423   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:18.437661   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:18.459692   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:18.459747   60948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 21:15:18.529408   60948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1212 21:15:18.529485   60948 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:18.690865   60948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:18.691034   60948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:18.691165   60948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:18.939806   60948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:18.939966   60948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:18.949943   60948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1212 21:15:19.070931   60948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:19.072676   60948 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:19.072783   60948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:19.072868   60948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:19.072976   60948 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:19.073053   60948 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:19.073145   60948 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:19.073253   60948 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:19.073367   60948 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:19.073461   60948 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:19.073562   60948 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:19.073669   60948 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:19.073732   60948 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:19.073833   60948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:19.136565   60948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:19.614416   60948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:19.754535   60948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:20.149412   60948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:20.150707   60948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:20.152444   60948 out.go:204]   - Booting up control plane ...
	I1212 21:15:20.152579   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:20.158445   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:20.162012   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:20.162125   60948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:20.163852   60948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:19.321950   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:21.334725   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:23.820711   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:26.320918   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.174689   60948 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.007313 seconds
	I1212 21:15:29.174814   60948 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:29.189641   60948 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:29.715080   60948 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:29.715312   60948 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-372099 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 21:15:30.225103   60948 kubeadm.go:322] [bootstrap-token] Using token: h843b5.c34afz2u52stqeoc
	I1212 21:15:30.226707   60948 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:30.226873   60948 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:30.237412   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:30.245755   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:30.252764   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:30.259184   60948 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:30.405726   60948 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:30.647756   60948 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:30.647812   60948 kubeadm.go:322] 
	I1212 21:15:30.647908   60948 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:30.647920   60948 kubeadm.go:322] 
	I1212 21:15:30.648030   60948 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:30.648040   60948 kubeadm.go:322] 
	I1212 21:15:30.648076   60948 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:30.648155   60948 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:30.648219   60948 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:30.648229   60948 kubeadm.go:322] 
	I1212 21:15:30.648358   60948 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:30.648477   60948 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:30.648571   60948 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:30.648582   60948 kubeadm.go:322] 
	I1212 21:15:30.648698   60948 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1212 21:15:30.648813   60948 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:30.648824   60948 kubeadm.go:322] 
	I1212 21:15:30.648920   60948 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649052   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:30.649101   60948 kubeadm.go:322]     --control-plane 	  
	I1212 21:15:30.649111   60948 kubeadm.go:322] 
	I1212 21:15:30.649205   60948 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:30.649214   60948 kubeadm.go:322] 
	I1212 21:15:30.649313   60948 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649435   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:30.649933   60948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:30.649961   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:15:30.649971   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:30.651531   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:30.652689   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:30.663574   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:15:30.686618   60948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:30.686690   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.686692   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=old-k8s-version-372099 minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.707974   60948 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:30.909886   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.037212   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.641453   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:28.819896   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.562965   60628 pod_ready.go:81] duration metric: took 4m0.000097626s waiting for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:29.563010   60628 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:29.563041   60628 pod_ready.go:38] duration metric: took 4m10.604144973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:29.563066   60628 kubeadm.go:640] restartCluster took 4m31.813522594s
	W1212 21:15:29.563127   60628 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:29.563156   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:32.141066   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:32.640787   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.140569   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.640785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.140535   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.641063   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.140492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.640819   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.140748   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.640647   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.141492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.641109   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.140524   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.641401   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.141549   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.641304   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.141537   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.641149   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.141391   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.640949   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.000355   60628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.437170953s)
	I1212 21:15:44.000430   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:44.014718   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:44.025263   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:44.035086   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:44.035133   60628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 21:15:44.089390   60628 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 21:15:44.089499   60628 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:44.275319   60628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:44.275496   60628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:44.275594   60628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:44.529521   60628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:42.141256   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:42.640563   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.140785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.640773   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.141155   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.641415   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.140534   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.641492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.141203   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.259301   60948 kubeadm.go:1088] duration metric: took 15.572687129s to wait for elevateKubeSystemPrivileges.
	I1212 21:15:46.259339   60948 kubeadm.go:406] StartCluster complete in 5m46.398198596s
	I1212 21:15:46.259364   60948 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.259455   60948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:15:46.261128   60948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.261410   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:15:46.261582   60948 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:15:46.261654   60948 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261676   60948 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-372099"
	W1212 21:15:46.261691   60948 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:15:46.261690   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:15:46.261729   60948 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261739   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.261745   60948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-372099"
	I1212 21:15:46.262128   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262150   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262176   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262204   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262371   60948 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-372099"
	I1212 21:15:46.262388   60948 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-372099"
	W1212 21:15:46.262396   60948 addons.go:240] addon metrics-server should already be in state true
	I1212 21:15:46.262431   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.262755   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262775   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.280829   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 21:15:46.281025   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1212 21:15:46.281167   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I1212 21:15:46.281451   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.282027   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282043   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282307   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282340   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282381   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282455   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282466   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.282760   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282816   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.283348   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283365   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283377   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.283388   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.286570   60948 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-372099"
	W1212 21:15:46.286591   60948 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:15:46.286618   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.287021   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.287041   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.300740   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1212 21:15:46.301674   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.301993   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I1212 21:15:46.302303   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.302317   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.302667   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.302772   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.302940   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.303112   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.303127   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.303537   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.304537   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.306285   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.308411   60948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:15:46.307398   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1212 21:15:46.307432   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.310694   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:15:46.310717   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:15:46.310737   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.311358   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.312839   60948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:15:44.530987   60628 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:44.531136   60628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:44.531267   60628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:44.531359   60628 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:44.531879   60628 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:44.532386   60628 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:44.533944   60628 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:44.535037   60628 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:44.536175   60628 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:44.537226   60628 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:44.537964   60628 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:44.538451   60628 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:44.538551   60628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:44.841462   60628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:45.059424   60628 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:15:45.613097   60628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:46.221274   60628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:46.372266   60628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:46.373199   60628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:46.376094   60628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:46.311872   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.314010   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.314158   60948 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.314170   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:15:46.314187   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.314387   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.314450   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.314958   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.314985   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.315221   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.315264   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.315563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.315745   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.315925   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.316191   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.322472   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324106   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.324142   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324390   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.324651   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.324861   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.325008   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.339982   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I1212 21:15:46.340365   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.340889   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.340915   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.341242   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.341434   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.343069   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.343366   60948 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.343384   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:15:46.343402   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.346212   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346596   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.346626   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346882   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.347322   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.347482   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.347618   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	W1212 21:15:46.380698   60948 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-372099" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:15:46.380724   60948 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1212 21:15:46.380745   60948 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:15:46.383175   60948 out.go:177] * Verifying Kubernetes components...
	I1212 21:15:46.384789   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:46.518292   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:15:46.518316   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:15:46.519393   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.554663   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.580810   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:15:46.580839   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:15:46.614409   60948 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.614501   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:15:46.628267   60948 node_ready.go:49] node "old-k8s-version-372099" has status "Ready":"True"
	I1212 21:15:46.628302   60948 node_ready.go:38] duration metric: took 13.858882ms waiting for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.628318   60948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:46.651927   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:46.651957   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:15:46.655191   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:46.734455   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:47.462832   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462859   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.462837   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462930   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465016   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465028   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465047   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465057   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465066   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465018   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465027   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465126   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465143   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465155   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465440   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465459   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465460   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465477   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465462   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465509   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.509931   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.509955   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.510242   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.510268   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.510289   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.529296   60948 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 21:15:47.740624   60948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006125978s)
	I1212 21:15:47.740686   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.740704   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.741066   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741082   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741104   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.741117   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741344   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741370   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741380   60948 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-372099"
	I1212 21:15:47.741382   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.743094   60948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:15:46.377620   60628 out.go:204]   - Booting up control plane ...
	I1212 21:15:46.377753   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:46.380316   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:46.381669   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:46.400406   60628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:46.401911   60628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:46.402016   60628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 21:15:46.577916   60628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:47.744911   60948 addons.go:502] enable addons completed in 1.483323446s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:15:48.879924   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:51.240011   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.081961   60628 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503798 seconds
	I1212 21:15:55.108753   60628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:55.132442   60628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:55.675426   60628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:55.675616   60628 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-343495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:15:56.197198   60628 kubeadm.go:322] [bootstrap-token] Using token: 6e6rca.dj99vsq9tzjoif3m
	I1212 21:15:56.198596   60628 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:56.198756   60628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:56.204758   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:15:56.217506   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:56.221482   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:56.225791   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:56.231024   60628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:56.249696   60628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:15:56.516070   60628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:56.613203   60628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:56.613227   60628 kubeadm.go:322] 
	I1212 21:15:56.613315   60628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:56.613340   60628 kubeadm.go:322] 
	I1212 21:15:56.613432   60628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:56.613447   60628 kubeadm.go:322] 
	I1212 21:15:56.613501   60628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:56.613588   60628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:56.613671   60628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:56.613682   60628 kubeadm.go:322] 
	I1212 21:15:56.613755   60628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 21:15:56.613762   60628 kubeadm.go:322] 
	I1212 21:15:56.613822   60628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:15:56.613832   60628 kubeadm.go:322] 
	I1212 21:15:56.613903   60628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:56.614004   60628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:56.614104   60628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:56.614116   60628 kubeadm.go:322] 
	I1212 21:15:56.614244   60628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:15:56.614369   60628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:56.614388   60628 kubeadm.go:322] 
	I1212 21:15:56.614507   60628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614653   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:56.614682   60628 kubeadm.go:322] 	--control-plane 
	I1212 21:15:56.614689   60628 kubeadm.go:322] 
	I1212 21:15:56.614787   60628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:56.614797   60628 kubeadm.go:322] 
	I1212 21:15:56.614865   60628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614993   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:56.616155   60628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:56.616184   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:15:56.616197   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:56.618787   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:53.240376   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.738865   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:56.620193   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:56.653642   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:15:56.701431   60628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:56.701520   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.701521   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=no-preload-343495 minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.765645   60628 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:57.021925   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.162627   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.772366   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.239852   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.239881   60948 pod_ready.go:81] duration metric: took 10.584655594s waiting for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.239895   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245919   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.245943   60948 pod_ready.go:81] duration metric: took 6.039649ms waiting for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245955   60948 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251905   60948 pod_ready.go:92] pod "kube-proxy-vzqkz" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.251933   60948 pod_ready.go:81] duration metric: took 5.969732ms waiting for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251943   60948 pod_ready.go:38] duration metric: took 10.623613273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:57.251963   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:15:57.252021   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:15:57.271808   60948 api_server.go:72] duration metric: took 10.891018678s to wait for apiserver process to appear ...
	I1212 21:15:57.271834   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:15:57.271853   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:15:57.279544   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:15:57.280373   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:15:57.280393   60948 api_server.go:131] duration metric: took 8.55283ms to wait for apiserver health ...
	I1212 21:15:57.280401   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:57.284489   60948 system_pods.go:59] 5 kube-system pods found
	I1212 21:15:57.284516   60948 system_pods.go:61] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.284520   60948 system_pods.go:61] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.284525   60948 system_pods.go:61] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.284531   60948 system_pods.go:61] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.284535   60948 system_pods.go:61] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.284542   60948 system_pods.go:74] duration metric: took 4.136571ms to wait for pod list to return data ...
	I1212 21:15:57.284549   60948 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:57.288616   60948 default_sa.go:45] found service account: "default"
	I1212 21:15:57.288643   60948 default_sa.go:55] duration metric: took 4.087698ms for default service account to be created ...
	I1212 21:15:57.288653   60948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:57.292785   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.292807   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.292812   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.292816   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.292822   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.292827   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.292842   60948 retry.go:31] will retry after 207.544988ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.505885   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.505911   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.505917   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.505921   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.505928   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.505932   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.505949   60948 retry.go:31] will retry after 367.076908ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.878466   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.878501   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.878509   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.878514   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.878520   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.878527   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.878547   60948 retry.go:31] will retry after 381.308829ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.264211   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.264237   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.264243   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.264247   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.264256   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.264262   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.264290   60948 retry.go:31] will retry after 366.461937ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.638206   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.638229   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.638234   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.638238   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.638245   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.638249   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.638276   60948 retry.go:31] will retry after 512.413163ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.156233   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.156263   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.156268   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.156272   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.156279   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.156284   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.156301   60948 retry.go:31] will retry after 775.973999ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.937928   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.937958   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.937966   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.937973   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.937983   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.937990   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.938009   60948 retry.go:31] will retry after 831.74396ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:00.775403   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:00.775427   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:00.775432   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:00.775436   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:00.775442   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:00.775447   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:00.775461   60948 retry.go:31] will retry after 1.069326929s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:01.849879   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:01.849906   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:01.849911   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:01.849915   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:01.849922   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:01.849927   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:01.849944   60948 retry.go:31] will retry after 1.540430535s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.271568   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:58.772443   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.271781   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.771732   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.272235   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.771891   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.271870   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.772445   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.271997   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.772496   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.395395   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:03.395421   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:03.395427   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:03.395431   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:03.395437   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:03.395442   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:03.395458   60948 retry.go:31] will retry after 2.25868002s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:05.661953   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:05.661988   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:05.661997   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:05.662005   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:05.662016   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:05.662026   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:05.662047   60948 retry.go:31] will retry after 2.893719866s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:03.272067   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.771992   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.272187   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.772518   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.272480   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.772460   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.272463   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.772291   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.271662   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.772063   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.272491   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.414409   60628 kubeadm.go:1088] duration metric: took 11.712956328s to wait for elevateKubeSystemPrivileges.
	I1212 21:16:08.414452   60628 kubeadm.go:406] StartCluster complete in 5m10.714058162s
	I1212 21:16:08.414480   60628 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.414582   60628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:16:08.417772   60628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.418132   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:16:08.418167   60628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:16:08.418267   60628 addons.go:69] Setting storage-provisioner=true in profile "no-preload-343495"
	I1212 21:16:08.418281   60628 addons.go:69] Setting default-storageclass=true in profile "no-preload-343495"
	I1212 21:16:08.418289   60628 addons.go:231] Setting addon storage-provisioner=true in "no-preload-343495"
	W1212 21:16:08.418297   60628 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:16:08.418301   60628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-343495"
	I1212 21:16:08.418310   60628 addons.go:69] Setting metrics-server=true in profile "no-preload-343495"
	I1212 21:16:08.418344   60628 addons.go:231] Setting addon metrics-server=true in "no-preload-343495"
	I1212 21:16:08.418349   60628 host.go:66] Checking if "no-preload-343495" exists ...
	W1212 21:16:08.418353   60628 addons.go:240] addon metrics-server should already be in state true
	I1212 21:16:08.418367   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:16:08.418401   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418810   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418850   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.437816   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I1212 21:16:08.438320   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.438921   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.438945   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.439225   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I1212 21:16:08.439418   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.439740   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.439809   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1212 21:16:08.440064   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.440092   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.440471   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.440491   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.440499   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.440887   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.440978   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.441002   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.441399   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.441442   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.441724   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.441960   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.446221   60628 addons.go:231] Setting addon default-storageclass=true in "no-preload-343495"
	W1212 21:16:08.446247   60628 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:16:08.446276   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.446655   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.446690   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.456479   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1212 21:16:08.456883   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.457330   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.457343   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.457784   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.457958   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.459741   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.461624   60628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:16:08.462951   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:16:08.462963   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:16:08.462978   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.462595   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1212 21:16:08.463831   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.464424   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.464443   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.465295   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.465627   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.467919   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468652   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.468681   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468905   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.469083   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.469197   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.469296   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.472614   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.474536   60628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:16:08.475957   60628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.475976   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:16:08.475995   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.476821   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I1212 21:16:08.477241   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.477772   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.477796   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.478322   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.479408   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.479457   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.479725   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480262   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.480285   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480565   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.480760   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.480909   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.481087   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.496182   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1212 21:16:08.496703   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.497250   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.497275   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.497705   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.497959   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.499696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.500049   60628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.500071   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:16:08.500098   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.503216   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503689   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.503717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503979   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.504187   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.504348   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.504521   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.519292   60628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-343495" context rescaled to 1 replicas
	I1212 21:16:08.519324   60628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:16:08.521243   60628 out.go:177] * Verifying Kubernetes components...
	I1212 21:16:08.522602   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:08.637693   60628 node_ready.go:35] waiting up to 6m0s for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.638072   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:16:08.640594   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:16:08.640620   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:16:08.645008   60628 node_ready.go:49] node "no-preload-343495" has status "Ready":"True"
	I1212 21:16:08.645041   60628 node_ready.go:38] duration metric: took 7.313798ms waiting for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.645056   60628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.650650   60628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658528   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.658556   60628 pod_ready.go:81] duration metric: took 7.881265ms waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658569   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682938   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.682962   60628 pod_ready.go:81] duration metric: took 24.384424ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682975   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.683220   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.688105   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:16:08.688131   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:16:08.695007   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.695034   60628 pod_ready.go:81] duration metric: took 12.050101ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.695046   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701206   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.701230   60628 pod_ready.go:81] duration metric: took 6.174333ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701240   60628 pod_ready.go:38] duration metric: took 56.165354ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.701262   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:16:08.701321   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:16:08.744650   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.758415   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:08.758444   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:16:08.841030   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:09.387385   60628 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1212 21:16:10.224475   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541186317s)
	I1212 21:16:10.224515   60628 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.523170366s)
	I1212 21:16:10.224548   60628 api_server.go:72] duration metric: took 1.705201863s to wait for apiserver process to appear ...
	I1212 21:16:10.224561   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:16:10.224571   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.479890747s)
	I1212 21:16:10.224606   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224579   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:16:10.224621   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.224522   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224686   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225001   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225050   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225065   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225074   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225011   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225019   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225020   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225115   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225130   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225140   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225347   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225358   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225507   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225572   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225600   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.233359   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:16:10.237567   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:16:10.237593   60628 api_server.go:131] duration metric: took 13.024501ms to wait for apiserver health ...
	I1212 21:16:10.237602   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:16:10.268851   60628 system_pods.go:59] 9 kube-system pods found
	I1212 21:16:10.268891   60628 system_pods.go:61] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268903   60628 system_pods.go:61] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268912   60628 system_pods.go:61] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.268920   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.268927   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.268936   60628 system_pods.go:61] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.268943   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.268953   60628 system_pods.go:61] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.268963   60628 system_pods.go:61] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending
	I1212 21:16:10.268971   60628 system_pods.go:74] duration metric: took 31.361836ms to wait for pod list to return data ...
	I1212 21:16:10.268987   60628 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:16:10.270947   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.270971   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.271270   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.271290   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.280134   60628 default_sa.go:45] found service account: "default"
	I1212 21:16:10.280159   60628 default_sa.go:55] duration metric: took 11.163534ms for default service account to be created ...
	I1212 21:16:10.280169   60628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:16:10.314822   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.314864   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314873   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314879   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.314886   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.314893   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.314903   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.314912   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.314923   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.314937   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.314957   60628 retry.go:31] will retry after 284.074155ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.328798   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.487713481s)
	I1212 21:16:10.328851   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.328866   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329251   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329276   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329276   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.329291   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.329304   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329540   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329556   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329566   60628 addons.go:467] Verifying addon metrics-server=true in "no-preload-343495"
	I1212 21:16:10.332474   60628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:16:08.563361   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:08.563393   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:08.563401   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:08.563408   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:08.563420   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:08.563427   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:08.563449   60948 retry.go:31] will retry after 2.871673075s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:11.441932   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:11.441970   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:11.441977   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:11.441983   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:11.441993   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.442003   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:11.442022   60948 retry.go:31] will retry after 3.977150615s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:10.333924   60628 addons.go:502] enable addons completed in 1.915760025s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:16:10.616684   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.616724   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616739   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616748   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.616757   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.616764   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.616775   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.616785   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.616795   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.616807   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.616825   60628 retry.go:31] will retry after 291.662068ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.919064   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.919104   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919114   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919125   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.919135   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.919142   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.919152   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.919160   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.919211   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.919229   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.919259   60628 retry.go:31] will retry after 381.992278ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.312083   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.312115   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.312121   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.312128   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.312137   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.312146   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.312152   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.312162   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.312170   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.312189   60628 retry.go:31] will retry after 495.705235ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.820167   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.820200   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.820205   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.820212   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.820217   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.820222   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.820226   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.820232   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.820237   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.820254   60628 retry.go:31] will retry after 635.810888ms: missing components: kube-dns, kube-proxy
	I1212 21:16:12.464096   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:12.464139   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Running
	I1212 21:16:12.464145   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:12.464149   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:12.464154   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:12.464158   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Running
	I1212 21:16:12.464162   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:12.464168   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:12.464176   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Running
	I1212 21:16:12.464185   60628 system_pods.go:126] duration metric: took 2.184010512s to wait for k8s-apps to be running ...
	I1212 21:16:12.464192   60628 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:12.464272   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:12.480090   60628 system_svc.go:56] duration metric: took 15.887114ms WaitForService to wait for kubelet.
	I1212 21:16:12.480124   60628 kubeadm.go:581] duration metric: took 3.960778694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:12.480163   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:12.483564   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:12.483589   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:12.483601   60628 node_conditions.go:105] duration metric: took 3.433071ms to run NodePressure ...
	I1212 21:16:12.483612   60628 start.go:228] waiting for startup goroutines ...
	I1212 21:16:12.483617   60628 start.go:233] waiting for cluster config update ...
	I1212 21:16:12.483626   60628 start.go:242] writing updated cluster config ...
	I1212 21:16:12.483887   60628 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:12.534680   60628 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 21:16:12.536622   60628 out.go:177] * Done! kubectl is now configured to use "no-preload-343495" cluster and "default" namespace by default
	I1212 21:16:15.424662   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:15.424691   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:15.424697   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:15.424701   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:15.424707   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:15.424712   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:15.424728   60948 retry.go:31] will retry after 4.920488737s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:20.351078   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:20.351107   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:20.351112   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:20.351116   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:20.351122   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:20.351127   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:20.351143   60948 retry.go:31] will retry after 5.718245097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:26.077073   60948 system_pods.go:86] 6 kube-system pods found
	I1212 21:16:26.077097   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:26.077103   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:26.077107   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Pending
	I1212 21:16:26.077111   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:26.077117   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:26.077122   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:26.077139   60948 retry.go:31] will retry after 8.251519223s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:34.334757   60948 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:34.334782   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:34.334787   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:34.334791   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:34.334796   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Pending
	I1212 21:16:34.334799   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:34.334804   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:34.334811   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:34.334815   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:34.334830   60948 retry.go:31] will retry after 8.584990669s: missing components: kube-apiserver, kube-scheduler
	I1212 21:16:42.927591   60948 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:42.927618   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:42.927624   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:42.927628   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:42.927632   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Running
	I1212 21:16:42.927637   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:42.927642   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:42.927647   60948 system_pods.go:89] "kube-scheduler-old-k8s-version-372099" [0e3e4e58-289f-47f1-999b-8fd87b90558a] Running
	I1212 21:16:42.927653   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:42.927658   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:42.927667   60948 system_pods.go:126] duration metric: took 45.639007967s to wait for k8s-apps to be running ...
	I1212 21:16:42.927673   60948 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:42.927715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:42.948680   60948 system_svc.go:56] duration metric: took 20.9943ms WaitForService to wait for kubelet.
	I1212 21:16:42.948711   60948 kubeadm.go:581] duration metric: took 56.56793182s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:42.948735   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:42.952462   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:42.952493   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:42.952505   60948 node_conditions.go:105] duration metric: took 3.763543ms to run NodePressure ...
	I1212 21:16:42.952518   60948 start.go:228] waiting for startup goroutines ...
	I1212 21:16:42.952527   60948 start.go:233] waiting for cluster config update ...
	I1212 21:16:42.952541   60948 start.go:242] writing updated cluster config ...
	I1212 21:16:42.952847   60948 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:43.001964   60948 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 21:16:43.003962   60948 out.go:177] 
	W1212 21:16:43.005327   60948 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 21:16:43.006827   60948 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 21:16:43.008259   60948 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-372099" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:10:27 UTC, ends at Tue 2023-12-12 21:25:14 UTC. --
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.249537243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416314249522432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=526451dc-7c65-469f-a738-82c0269e9bc1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.250417324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0f897697-fa4f-4360-af8f-6ef1db8439e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.250487906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0f897697-fa4f-4360-af8f-6ef1db8439e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.250761750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0f897697-fa4f-4360-af8f-6ef1db8439e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.297052256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ae244cfe-89ec-40d1-9a24-ade7c7d79f61 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.297205934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ae244cfe-89ec-40d1-9a24-ade7c7d79f61 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.299218778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f9494c28-c30b-4271-9aec-9cccdf5f09e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.299558619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416314299542827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f9494c28-c30b-4271-9aec-9cccdf5f09e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.300506257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=38bb0217-03eb-4a83-8401-603a56a896c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.300583518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=38bb0217-03eb-4a83-8401-603a56a896c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.300749340Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=38bb0217-03eb-4a83-8401-603a56a896c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.345735813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d2c6d364-6ae4-4112-9fcc-8d97dd8c1902 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.345828132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d2c6d364-6ae4-4112-9fcc-8d97dd8c1902 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.347448979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=70f277da-b54a-4990-a5bc-14a5dc6c11e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.347784889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416314347771477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=70f277da-b54a-4990-a5bc-14a5dc6c11e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.348657418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f2393fd-b873-4862-ba40-8b59875b5ec9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.348738128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f2393fd-b873-4862-ba40-8b59875b5ec9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.348956036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f2393fd-b873-4862-ba40-8b59875b5ec9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.389034658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e5f27a6b-b9f4-4c63-a1a7-319518c910e3 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.389221942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e5f27a6b-b9f4-4c63-a1a7-319518c910e3 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.391240211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1eab851d-6a5b-49eb-ba81-e22eed085a5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.391550621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416314391536523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=1eab851d-6a5b-49eb-ba81-e22eed085a5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.392198367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=72a3c849-1f36-47ee-b04c-09f754eb85c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.392248757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72a3c849-1f36-47ee-b04c-09f754eb85c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:14 no-preload-343495 crio[714]: time="2023-12-12 21:25:14.392402100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72a3c849-1f36-47ee-b04c-09f754eb85c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	410771844ae4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ba06521563fe8       storage-provisioner
	dce7bff2c61d6       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   7aeada5e57207       kube-proxy-glrvd
	be26409f1ba8e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   efbd172f52958       coredns-76f75df574-466sr
	787a1144b71c5       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   b2df9f9cb1384       etcd-no-preload-343495
	d26db7fee6c9e       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   4a0d87886d52a       kube-scheduler-no-preload-343495
	f9708b2bba83f       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   a16294de5c2dd       kube-apiserver-no-preload-343495
	ae2f388693d39       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   24ad9fecd9244       kube-controller-manager-no-preload-343495
	
	
	==> coredns [be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60825 - 21233 "HINFO IN 1011411155478666539.2533239205206563428. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010454599s
	
	
	==> describe nodes <==
	Name:               no-preload-343495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-343495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=no-preload-343495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-343495
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 21:25:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:21:22 +0000   Tue, 12 Dec 2023 21:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:21:22 +0000   Tue, 12 Dec 2023 21:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:21:22 +0000   Tue, 12 Dec 2023 21:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:21:22 +0000   Tue, 12 Dec 2023 21:15:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.176
	  Hostname:    no-preload-343495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9916e37b2280452399561c1888073016
	  System UUID:                9916e37b-2280-4523-9956-1c1888073016
	  Boot ID:                    78a30efc-5e15-4263-ba93-714a7384fb57
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-466sr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-343495                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-343495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-343495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-glrvd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-no-preload-343495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-57f55c9bc5-xc79n              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node no-preload-343495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node no-preload-343495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node no-preload-343495 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-343495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-343495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-343495 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m18s                  kubelet          Node no-preload-343495 status is now: NodeNotReady
	  Normal  NodeReady                9m18s                  kubelet          Node no-preload-343495 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m6s                   node-controller  Node no-preload-343495 event: Registered Node no-preload-343495 in Controller
	
	
	==> dmesg <==
	[Dec12 21:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070530] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.126680] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.511665] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142408] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.557951] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.252734] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.117782] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.149017] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.103233] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.225197] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Dec12 21:11] systemd-fstab-generator[1330]: Ignoring "noauto" for root device
	[ +20.711752] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 21:15] systemd-fstab-generator[3956]: Ignoring "noauto" for root device
	[  +9.846870] systemd-fstab-generator[4281]: Ignoring "noauto" for root device
	[Dec12 21:16] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd] <==
	{"level":"info","ts":"2023-12-12T21:15:51.075346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a switched to configuration voters=(357180144389535578)"}
	{"level":"info","ts":"2023-12-12T21:15:51.080418Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"310df9cc729b3e75","local-member-id":"4f4f572eb29375a","added-peer-id":"4f4f572eb29375a","added-peer-peer-urls":["https://192.168.61.176:2380"]}
	{"level":"info","ts":"2023-12-12T21:15:51.105002Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T21:15:51.10761Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4f4f572eb29375a","initial-advertise-peer-urls":["https://192.168.61.176:2380"],"listen-peer-urls":["https://192.168.61.176:2380"],"advertise-client-urls":["https://192.168.61.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T21:15:51.107348Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.176:2380"}
	{"level":"info","ts":"2023-12-12T21:15:51.110688Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T21:15:51.1108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.176:2380"}
	{"level":"info","ts":"2023-12-12T21:15:51.714196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T21:15:51.714275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T21:15:51.714318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a received MsgPreVoteResp from 4f4f572eb29375a at term 1"}
	{"level":"info","ts":"2023-12-12T21:15:51.714335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.714343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a received MsgVoteResp from 4f4f572eb29375a at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.714353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became leader at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.714363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4f4f572eb29375a elected leader 4f4f572eb29375a at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.715936Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.7171Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4f4f572eb29375a","local-member-attributes":"{Name:no-preload-343495 ClientURLs:[https://192.168.61.176:2379]}","request-path":"/0/members/4f4f572eb29375a/attributes","cluster-id":"310df9cc729b3e75","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T21:15:51.717223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T21:15:51.717765Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"310df9cc729b3e75","local-member-id":"4f4f572eb29375a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.717856Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.717889Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.7179Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T21:15:51.718932Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T21:15:51.718991Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T21:15:51.719635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.176:2379"}
	{"level":"info","ts":"2023-12-12T21:15:51.720606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:25:14 up 14 min,  0 users,  load average: 0.59, 0.33, 0.21
	Linux no-preload-343495 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06] <==
	I1212 21:19:10.824360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:20:53.257526       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:20:53.257870       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1212 21:20:54.258019       1 handler_proxy.go:93] no RequestInfo found in the context
	W1212 21:20:54.258257       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:20:54.258456       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:20:54.258494       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1212 21:20:54.258342       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:20:54.260648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:21:54.259641       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:21:54.259945       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:21:54.259979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:21:54.261281       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:21:54.261360       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:21:54.261389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:23:54.260741       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:23:54.260878       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:23:54.260891       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:23:54.262208       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:23:54.262235       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:23:54.262245       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b] <==
	I1212 21:19:38.960033       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:20:08.593099       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:20:08.968849       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:20:38.599568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:20:38.978392       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:21:08.606212       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:21:08.988281       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:21:38.612673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:21:38.998294       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:22:02.796075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="299.494µs"
	E1212 21:22:08.620924       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:22:09.008093       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:22:15.793563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="191.683µs"
	E1212 21:22:38.627448       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:22:39.017998       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:23:08.632818       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:23:09.027683       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:23:38.639416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:23:39.036250       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:24:08.646896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:24:09.045416       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:24:38.653786       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:24:39.055640       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:25:08.664478       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:25:09.064342       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063] <==
	I1212 21:16:11.743894       1 server_others.go:72] "Using iptables proxy"
	I1212 21:16:11.760586       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.176"]
	I1212 21:16:11.872304       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 21:16:11.872373       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 21:16:11.872393       1 server_others.go:168] "Using iptables Proxier"
	I1212 21:16:11.875543       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 21:16:11.875752       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1212 21:16:11.875792       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:16:11.877427       1 config.go:188] "Starting service config controller"
	I1212 21:16:11.877474       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 21:16:11.877500       1 config.go:97] "Starting endpoint slice config controller"
	I1212 21:16:11.877504       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 21:16:11.880193       1 config.go:315] "Starting node config controller"
	I1212 21:16:11.880230       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 21:16:11.978511       1 shared_informer.go:318] Caches are synced for service config
	I1212 21:16:11.978793       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 21:16:11.981808       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a] <==
	E1212 21:15:53.273812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 21:15:53.273849       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:15:53.273889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 21:15:53.273945       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:53.273954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 21:15:53.274018       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:53.274054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:53.275603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.109014       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 21:15:54.109072       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 21:15:54.233971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:54.234036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.351940       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:15:54.352001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 21:15:54.427981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:54.428041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.450002       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:15:54.450078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 21:15:54.588980       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:15:54.589036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 21:15:54.645235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:54.645305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.647723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:54.647794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1212 21:15:57.064415       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:10:27 UTC, ends at Tue 2023-12-12 21:25:14 UTC. --
	Dec 12 21:22:29 no-preload-343495 kubelet[4288]: E1212 21:22:29.776342    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:22:44 no-preload-343495 kubelet[4288]: E1212 21:22:44.776467    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:22:55 no-preload-343495 kubelet[4288]: E1212 21:22:55.776714    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:22:56 no-preload-343495 kubelet[4288]: E1212 21:22:56.893314    4288 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:22:56 no-preload-343495 kubelet[4288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:22:56 no-preload-343495 kubelet[4288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:22:56 no-preload-343495 kubelet[4288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:23:09 no-preload-343495 kubelet[4288]: E1212 21:23:09.776889    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:23:24 no-preload-343495 kubelet[4288]: E1212 21:23:24.776838    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:23:39 no-preload-343495 kubelet[4288]: E1212 21:23:39.776455    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:23:54 no-preload-343495 kubelet[4288]: E1212 21:23:54.776833    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:23:56 no-preload-343495 kubelet[4288]: E1212 21:23:56.890327    4288 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:23:56 no-preload-343495 kubelet[4288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:23:56 no-preload-343495 kubelet[4288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:23:56 no-preload-343495 kubelet[4288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:24:07 no-preload-343495 kubelet[4288]: E1212 21:24:07.775903    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:24:19 no-preload-343495 kubelet[4288]: E1212 21:24:19.775753    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:24:32 no-preload-343495 kubelet[4288]: E1212 21:24:32.776503    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:24:44 no-preload-343495 kubelet[4288]: E1212 21:24:44.776267    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:24:55 no-preload-343495 kubelet[4288]: E1212 21:24:55.776342    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:24:56 no-preload-343495 kubelet[4288]: E1212 21:24:56.892920    4288 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:24:56 no-preload-343495 kubelet[4288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:24:56 no-preload-343495 kubelet[4288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:24:56 no-preload-343495 kubelet[4288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:25:10 no-preload-343495 kubelet[4288]: E1212 21:25:10.777485    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	
	
	==> storage-provisioner [410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4] <==
	I1212 21:16:11.941747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:16:11.993789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:16:11.993927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:16:12.014634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:16:12.015719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a32f3864-f015-4e37-be30-850cb267aa84", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-343495_4ec44916-4937-426c-a8cb-8e309ece4040 became leader
	I1212 21:16:12.015985       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-343495_4ec44916-4937-426c-a8cb-8e309ece4040!
	I1212 21:16:12.116319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-343495_4ec44916-4937-426c-a8cb-8e309ece4040!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-343495 -n no-preload-343495
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-343495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xc79n
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-343495 describe pod metrics-server-57f55c9bc5-xc79n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-343495 describe pod metrics-server-57f55c9bc5-xc79n: exit status 1 (78.755284ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xc79n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-343495 describe pod metrics-server-57f55c9bc5-xc79n: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:16:45.697730   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:16:48.880519   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:17:22.810265   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:18:08.741780   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:18:12.358273   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:18:45.800661   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:18:45.855208   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:18:56.433511   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 21:19:35.402602   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:19:39.385063   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:19:42.874494   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:20:08.846169   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:20:19.481715   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 21:20:20.138623   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:21:05.921182   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:21:06.483511   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:21:43.186711   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:21:45.698177   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:21:48.881427   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:22:22.809966   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:22:29.529474   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:23:11.931142   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:23:12.358342   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-372099 -n old-k8s-version-372099
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:25:43.599638994 +0000 UTC m=+5344.819811569
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-372099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-372099 logs -n 25: (1.731731857s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-690675 sudo cat                              | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo find                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo crio                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-690675                                       | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:06:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:06:02.112042   61298 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:06:02.112158   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112166   61298 out.go:309] Setting ErrFile to fd 2...
	I1212 21:06:02.112171   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112352   61298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:06:02.112888   61298 out.go:303] Setting JSON to false
	I1212 21:06:02.113799   61298 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6516,"bootTime":1702408646,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:06:02.113858   61298 start.go:138] virtualization: kvm guest
	I1212 21:06:02.116152   61298 out.go:177] * [default-k8s-diff-port-171828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:06:02.118325   61298 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:06:02.118373   61298 notify.go:220] Checking for updates...
	I1212 21:06:02.120036   61298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:06:02.121697   61298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:06:02.123350   61298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:06:02.124958   61298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:06:02.126355   61298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:06:02.128221   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:06:02.128652   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.128709   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.143368   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I1212 21:06:02.143740   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.144319   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.144342   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.144674   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.144877   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.145143   61298 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:06:02.145473   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.145519   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.160165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1212 21:06:02.160611   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.161098   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.161129   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.161410   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.161605   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.198703   61298 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:06:02.199992   61298 start.go:298] selected driver: kvm2
	I1212 21:06:02.200011   61298 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.200131   61298 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:06:02.200848   61298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.200920   61298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:06:02.215947   61298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:06:02.216333   61298 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:02.216397   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:06:02.216410   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:06:02.216420   61298 start_flags.go:323] config:
	{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-17182
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.216597   61298 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.218773   61298 out.go:177] * Starting control plane node default-k8s-diff-port-171828 in cluster default-k8s-diff-port-171828
	I1212 21:05:59.427580   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:02.220182   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:06:02.220241   61298 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 21:06:02.220256   61298 cache.go:56] Caching tarball of preloaded images
	I1212 21:06:02.220379   61298 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:06:02.220393   61298 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 21:06:02.220514   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:06:02.220739   61298 start.go:365] acquiring machines lock for default-k8s-diff-port-171828: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:06:05.507538   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:08.579605   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:14.659535   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:17.731542   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:23.811575   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:26.883541   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:32.963600   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:36.035521   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:42.115475   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:45.187562   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:51.267528   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:54.339532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:00.419548   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:03.491553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:09.571514   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:12.643531   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:18.723534   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:21.795549   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:27.875554   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:30.947574   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:37.027523   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:40.099490   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:46.179518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:49.251577   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:55.331532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:58.403520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:04.483547   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:07.555546   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:13.635553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:16.707518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:22.787551   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:25.859539   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:31.939511   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:35.011564   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:41.091518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:44.163443   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:50.243526   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:53.315520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:59.395550   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:02.467533   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:05.471384   60833 start.go:369] acquired machines lock for "embed-certs-831188" in 4m18.011296189s
	I1212 21:09:05.471446   60833 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:05.471453   60833 fix.go:54] fixHost starting: 
	I1212 21:09:05.471803   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:05.471837   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:05.486451   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1212 21:09:05.486900   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:05.487381   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:05.487404   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:05.487715   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:05.487879   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:05.488020   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:05.489670   60833 fix.go:102] recreateIfNeeded on embed-certs-831188: state=Stopped err=<nil>
	I1212 21:09:05.489704   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	W1212 21:09:05.489876   60833 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:05.492059   60833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831188" ...
	I1212 21:09:05.493752   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Start
	I1212 21:09:05.493959   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring networks are active...
	I1212 21:09:05.494984   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network default is active
	I1212 21:09:05.495423   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network mk-embed-certs-831188 is active
	I1212 21:09:05.495761   60833 main.go:141] libmachine: (embed-certs-831188) Getting domain xml...
	I1212 21:09:05.496421   60833 main.go:141] libmachine: (embed-certs-831188) Creating domain...
	I1212 21:09:06.732388   60833 main.go:141] libmachine: (embed-certs-831188) Waiting to get IP...
	I1212 21:09:06.733338   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:06.733708   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:06.733785   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:06.733676   61768 retry.go:31] will retry after 284.906493ms: waiting for machine to come up
	I1212 21:09:07.020284   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.020718   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.020745   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.020671   61768 retry.go:31] will retry after 293.274895ms: waiting for machine to come up
	I1212 21:09:07.315313   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.315686   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.315712   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.315641   61768 retry.go:31] will retry after 361.328832ms: waiting for machine to come up
	I1212 21:09:05.469256   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:05.469293   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:09:05.471233   60628 machine.go:91] provisioned docker machine in 4m37.408714984s
	I1212 21:09:05.471294   60628 fix.go:56] fixHost completed within 4m37.431179626s
	I1212 21:09:05.471299   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 4m37.431203273s
	W1212 21:09:05.471318   60628 start.go:694] error starting host: provision: host is not running
	W1212 21:09:05.471416   60628 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 21:09:05.471424   60628 start.go:709] Will try again in 5 seconds ...
	I1212 21:09:07.678255   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.678636   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.678700   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.678599   61768 retry.go:31] will retry after 604.479659ms: waiting for machine to come up
	I1212 21:09:08.284350   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:08.284754   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:08.284779   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:08.284701   61768 retry.go:31] will retry after 731.323448ms: waiting for machine to come up
	I1212 21:09:09.017564   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.018007   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.018040   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.017968   61768 retry.go:31] will retry after 734.083609ms: waiting for machine to come up
	I1212 21:09:09.753947   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.754423   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.754446   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.754362   61768 retry.go:31] will retry after 786.816799ms: waiting for machine to come up
	I1212 21:09:10.542771   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:10.543304   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:10.543341   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:10.543264   61768 retry.go:31] will retry after 1.40646031s: waiting for machine to come up
	I1212 21:09:11.951821   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:11.952180   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:11.952223   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:11.952135   61768 retry.go:31] will retry after 1.693488962s: waiting for machine to come up
	I1212 21:09:10.473087   60628 start.go:365] acquiring machines lock for no-preload-343495: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:09:13.646801   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:13.647256   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:13.647299   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:13.647180   61768 retry.go:31] will retry after 1.856056162s: waiting for machine to come up
	I1212 21:09:15.504815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:15.505228   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:15.505258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:15.505175   61768 retry.go:31] will retry after 2.008264333s: waiting for machine to come up
	I1212 21:09:17.516231   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:17.516653   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:17.516683   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:17.516604   61768 retry.go:31] will retry after 3.239343078s: waiting for machine to come up
	I1212 21:09:20.757258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:20.757696   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:20.757725   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:20.757654   61768 retry.go:31] will retry after 4.315081016s: waiting for machine to come up
	I1212 21:09:26.424166   60948 start.go:369] acquired machines lock for "old-k8s-version-372099" in 4m29.049387398s
	I1212 21:09:26.424241   60948 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:26.424254   60948 fix.go:54] fixHost starting: 
	I1212 21:09:26.424715   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:26.424763   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:26.444634   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I1212 21:09:26.445043   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:26.445520   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:09:26.445538   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:26.445863   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:26.446052   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:26.446192   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:09:26.447776   60948 fix.go:102] recreateIfNeeded on old-k8s-version-372099: state=Stopped err=<nil>
	I1212 21:09:26.447804   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	W1212 21:09:26.448015   60948 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:26.450126   60948 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-372099" ...
	I1212 21:09:26.451553   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Start
	I1212 21:09:26.451708   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring networks are active...
	I1212 21:09:26.452388   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network default is active
	I1212 21:09:26.452655   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network mk-old-k8s-version-372099 is active
	I1212 21:09:26.453124   60948 main.go:141] libmachine: (old-k8s-version-372099) Getting domain xml...
	I1212 21:09:26.453799   60948 main.go:141] libmachine: (old-k8s-version-372099) Creating domain...
	I1212 21:09:25.078112   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078553   60833 main.go:141] libmachine: (embed-certs-831188) Found IP for machine: 192.168.50.163
	I1212 21:09:25.078585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has current primary IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078596   60833 main.go:141] libmachine: (embed-certs-831188) Reserving static IP address...
	I1212 21:09:25.078997   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.079030   60833 main.go:141] libmachine: (embed-certs-831188) Reserved static IP address: 192.168.50.163
	I1212 21:09:25.079052   60833 main.go:141] libmachine: (embed-certs-831188) DBG | skip adding static IP to network mk-embed-certs-831188 - found existing host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"}
	I1212 21:09:25.079071   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Getting to WaitForSSH function...
	I1212 21:09:25.079085   60833 main.go:141] libmachine: (embed-certs-831188) Waiting for SSH to be available...
	I1212 21:09:25.080901   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081194   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.081242   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081366   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH client type: external
	I1212 21:09:25.081388   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa (-rw-------)
	I1212 21:09:25.081416   60833 main.go:141] libmachine: (embed-certs-831188) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:25.081426   60833 main.go:141] libmachine: (embed-certs-831188) DBG | About to run SSH command:
	I1212 21:09:25.081438   60833 main.go:141] libmachine: (embed-certs-831188) DBG | exit 0
	I1212 21:09:25.171277   60833 main.go:141] libmachine: (embed-certs-831188) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:25.171663   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetConfigRaw
	I1212 21:09:25.172345   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.174944   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175302   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.175333   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175553   60833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/config.json ...
	I1212 21:09:25.175828   60833 machine.go:88] provisioning docker machine ...
	I1212 21:09:25.175855   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:25.176065   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176212   60833 buildroot.go:166] provisioning hostname "embed-certs-831188"
	I1212 21:09:25.176233   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176371   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.178556   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178823   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.178850   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178957   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.179142   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179295   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179436   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.179558   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.179895   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.179910   60833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831188 && echo "embed-certs-831188" | sudo tee /etc/hostname
	I1212 21:09:25.312418   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831188
	
	I1212 21:09:25.312457   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.315156   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315529   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.315570   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315707   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.315895   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316053   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316211   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.316378   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.316840   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.316869   60833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831188' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831188/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831188' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:25.448302   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:25.448332   60833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:25.448353   60833 buildroot.go:174] setting up certificates
	I1212 21:09:25.448362   60833 provision.go:83] configureAuth start
	I1212 21:09:25.448369   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.448691   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.451262   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.451639   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451807   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.454144   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454434   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.454460   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454596   60833 provision.go:138] copyHostCerts
	I1212 21:09:25.454665   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:25.454689   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:25.454775   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:25.454928   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:25.454940   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:25.454984   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:25.455062   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:25.455073   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:25.455106   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:25.455171   60833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831188 san=[192.168.50.163 192.168.50.163 localhost 127.0.0.1 minikube embed-certs-831188]
	I1212 21:09:25.678855   60833 provision.go:172] copyRemoteCerts
	I1212 21:09:25.678942   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:25.678975   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.681866   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682221   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.682249   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682399   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.682590   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.682730   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.682856   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:25.773454   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:25.796334   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 21:09:25.818680   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:09:25.840234   60833 provision.go:86] duration metric: configureAuth took 391.845214ms
	I1212 21:09:25.840268   60833 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:25.840497   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:25.840643   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.842988   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843431   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.843482   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843586   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.843772   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.843946   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.844066   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.844227   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.844542   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.844563   60833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:26.167363   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:26.167388   60833 machine.go:91] provisioned docker machine in 991.541719ms
	I1212 21:09:26.167398   60833 start.go:300] post-start starting for "embed-certs-831188" (driver="kvm2")
	I1212 21:09:26.167408   60833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:26.167444   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.167739   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:26.167763   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.170188   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170569   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.170611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170712   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.170880   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.171049   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.171194   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.261249   60833 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:26.265429   60833 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:26.265451   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:26.265522   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:26.265602   60833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:26.265695   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:26.274054   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:26.297890   60833 start.go:303] post-start completed in 130.478946ms
	I1212 21:09:26.297915   60833 fix.go:56] fixHost completed within 20.826462284s
	I1212 21:09:26.297934   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.300585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.300934   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.300975   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.301144   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.301359   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301529   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301665   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.301797   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:26.302153   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:26.302164   60833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:26.423978   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415366.370228005
	
	I1212 21:09:26.424008   60833 fix.go:206] guest clock: 1702415366.370228005
	I1212 21:09:26.424019   60833 fix.go:219] Guest: 2023-12-12 21:09:26.370228005 +0000 UTC Remote: 2023-12-12 21:09:26.297918475 +0000 UTC m=+278.991313322 (delta=72.30953ms)
	I1212 21:09:26.424052   60833 fix.go:190] guest clock delta is within tolerance: 72.30953ms
	I1212 21:09:26.424061   60833 start.go:83] releasing machines lock for "embed-certs-831188", held for 20.952636536s
	I1212 21:09:26.424090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.424347   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:26.427068   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427479   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.427519   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427592   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428173   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428344   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428414   60833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:26.428470   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.428492   60833 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:26.428508   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.430943   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431251   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431371   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431393   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431548   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431631   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431654   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431776   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.431844   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431998   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.432040   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432285   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.432490   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.548980   60833 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:26.555211   60833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:26.707171   60833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:26.714564   60833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:26.714658   60833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:26.730858   60833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:26.730890   60833 start.go:475] detecting cgroup driver to use...
	I1212 21:09:26.730963   60833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:26.751316   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:26.766700   60833 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:26.766767   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:26.783157   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:26.799559   60833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:26.908659   60833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:27.029185   60833 docker.go:219] disabling docker service ...
	I1212 21:09:27.029245   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:27.042969   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:27.055477   60833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:27.174297   60833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:27.285338   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:27.299676   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:27.317832   60833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:09:27.317900   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.329270   60833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:27.329346   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.341201   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.353243   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.365796   60833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:27.377700   60833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:27.388796   60833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:27.388858   60833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:27.401983   60833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:27.411527   60833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:27.523326   60833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:27.702370   60833 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:27.702435   60833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:27.707537   60833 start.go:543] Will wait 60s for crictl version
	I1212 21:09:27.707619   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:09:27.711502   60833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:27.750808   60833 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:27.750912   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.799419   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.848900   60833 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 21:09:27.722142   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting to get IP...
	I1212 21:09:27.723300   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.723736   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.723806   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.723702   61894 retry.go:31] will retry after 267.755874ms: waiting for machine to come up
	I1212 21:09:27.993406   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.993917   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.993947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.993865   61894 retry.go:31] will retry after 314.872831ms: waiting for machine to come up
	I1212 21:09:28.310446   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.311022   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.311051   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.310971   61894 retry.go:31] will retry after 435.368111ms: waiting for machine to come up
	I1212 21:09:28.747774   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.748267   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.748299   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.748238   61894 retry.go:31] will retry after 521.305154ms: waiting for machine to come up
	I1212 21:09:29.270989   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.271519   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.271553   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.271446   61894 retry.go:31] will retry after 482.42376ms: waiting for machine to come up
	I1212 21:09:29.755222   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.755724   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.755755   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.755671   61894 retry.go:31] will retry after 676.918794ms: waiting for machine to come up
	I1212 21:09:30.434488   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:30.435072   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:30.435103   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:30.435025   61894 retry.go:31] will retry after 876.618903ms: waiting for machine to come up
	I1212 21:09:31.313270   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:31.313826   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:31.313857   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:31.313775   61894 retry.go:31] will retry after 1.03353638s: waiting for machine to come up
	I1212 21:09:27.850614   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:27.853633   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854033   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:27.854069   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854243   60833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:27.858626   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:27.871999   60833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:09:27.872058   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:27.920758   60833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:09:27.920832   60833 ssh_runner.go:195] Run: which lz4
	I1212 21:09:27.924857   60833 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:27.929186   60833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:27.929220   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:09:29.834194   60833 crio.go:444] Took 1.909381 seconds to copy over tarball
	I1212 21:09:29.834285   60833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:32.348562   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:32.349019   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:32.349041   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:32.348978   61894 retry.go:31] will retry after 1.80085882s: waiting for machine to come up
	I1212 21:09:34.151943   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:34.152375   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:34.152416   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:34.152343   61894 retry.go:31] will retry after 2.08304575s: waiting for machine to come up
	I1212 21:09:36.238682   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:36.239115   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:36.239149   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:36.239074   61894 retry.go:31] will retry after 2.109809124s: waiting for machine to come up
	I1212 21:09:33.005355   60833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.171034001s)
	I1212 21:09:33.005386   60833 crio.go:451] Took 3.171167 seconds to extract the tarball
	I1212 21:09:33.005398   60833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:33.046773   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:33.101606   60833 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:09:33.101627   60833 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:09:33.101689   60833 ssh_runner.go:195] Run: crio config
	I1212 21:09:33.162553   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:33.162584   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:33.162608   60833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:33.162637   60833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831188 NodeName:embed-certs-831188 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:09:33.162806   60833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831188"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:33.162923   60833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-831188 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:33.162978   60833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:09:33.171937   60833 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:33.172013   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:33.180480   60833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 21:09:33.197675   60833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:33.214560   60833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 21:09:33.234926   60833 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:33.238913   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:33.255261   60833 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188 for IP: 192.168.50.163
	I1212 21:09:33.255320   60833 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:33.255462   60833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:33.255496   60833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:33.255561   60833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/client.key
	I1212 21:09:33.255641   60833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key.6a576ed8
	I1212 21:09:33.255686   60833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key
	I1212 21:09:33.255781   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:33.255807   60833 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:33.255814   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:33.255835   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:33.255864   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:33.255885   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:33.255931   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:33.256505   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:33.282336   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:33.307179   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:33.332468   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:09:33.357444   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:33.383372   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:33.409070   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:33.438164   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:33.467676   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:33.496645   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:33.523126   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:33.548366   60833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:33.567745   60833 ssh_runner.go:195] Run: openssl version
	I1212 21:09:33.573716   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:33.584221   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589689   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589767   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.595880   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:33.609574   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:33.623129   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629541   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629615   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.635862   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:33.646421   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:33.656686   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661397   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661473   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.667092   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
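	The three hash-and-symlink sequences above install each CA bundle under /etc/ssl/certs as "<openssl-subject-hash>.0", which is how OpenSSL-based clients locate trusted CAs. Below is a minimal Go sketch of that pattern, shelling out to `openssl x509 -hash -noout` and creating the link exactly as the commands in the log do; the paths are the ones shown above, and the helper name is hypothetical, not minikube's own code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of certPath and symlinks it
	// into certsDir as "<hash>.0", the layout OpenSSL uses to find trusted CAs.
	// Sketch only: it shells out to openssl just like the log lines above.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		// Replace an existing link if present (the effect of `ln -fs`).
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}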
	I1212 21:09:33.677905   60833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:33.682795   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:33.689346   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:33.695822   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:33.702368   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:33.708500   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:33.714793   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
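	The six `openssl x509 -checkend 86400` runs above confirm that each existing control-plane certificate is still valid for at least another 24 hours before the cluster is reused. A minimal Go sketch of the same check using crypto/x509 instead of shelling out; the path is one of the files checked above, and the function name is illustrative only.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM-encoded certificate at path remains
	// valid for at least duration d, the equivalent of `openssl x509 -checkend`.
	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for next 24h:", ok)
	}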
	I1212 21:09:33.721121   60833 kubeadm.go:404] StartCluster: {Name:embed-certs-831188 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:33.721252   60833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:33.721319   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:33.759428   60833 cri.go:89] found id: ""
	I1212 21:09:33.759502   60833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:33.769592   60833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:33.769617   60833 kubeadm.go:636] restartCluster start
	I1212 21:09:33.769712   60833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:33.779313   60833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.780838   60833 kubeconfig.go:92] found "embed-certs-831188" server: "https://192.168.50.163:8443"
	I1212 21:09:33.784096   60833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:33.793192   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.793314   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.805112   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.805139   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.805196   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.816975   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.317757   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.317858   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.329702   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.817167   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.817266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.828633   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.317136   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.317230   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.328803   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.818032   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.818121   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.829428   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.318141   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.318253   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.330749   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.817284   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.817367   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.828787   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:37.317183   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.317266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.334557   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.350131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:38.350522   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:38.350546   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:38.350484   61894 retry.go:31] will retry after 2.423656351s: waiting for machine to come up
	I1212 21:09:40.777036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:40.777455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:40.777489   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:40.777399   61894 retry.go:31] will retry after 3.275180742s: waiting for machine to come up
	I1212 21:09:37.817090   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.817219   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.833813   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.317328   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.317409   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.334684   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.817255   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.817353   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.831011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.317555   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.317648   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.330189   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.817759   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.817866   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.830611   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.317127   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.317198   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.329508   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.817580   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.817677   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.829289   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.317853   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.317928   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.331394   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.818013   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.818098   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.829011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:42.317526   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.317610   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.329211   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:44.056058   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:44.056558   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:44.056587   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:44.056517   61894 retry.go:31] will retry after 4.729711581s: waiting for machine to come up
	I1212 21:09:42.818081   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.818166   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.829930   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.317420   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:43.317526   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:43.328536   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.794084   60833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:09:43.794118   60833 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:09:43.794129   60833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:09:43.794192   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:43.842360   60833 cri.go:89] found id: ""
	I1212 21:09:43.842431   60833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:09:43.859189   60833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:09:43.869065   60833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:09:43.869135   60833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878614   60833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878644   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.011533   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.544591   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.757944   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.850440   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.942874   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:09:44.942967   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:44.954886   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.466556   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.966545   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.465991   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.966021   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.987348   60833 api_server.go:72] duration metric: took 2.04447632s to wait for apiserver process to appear ...
	I1212 21:09:46.987374   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:09:46.987388   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.987890   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:46.987926   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.988389   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:50.008527   61298 start.go:369] acquired machines lock for "default-k8s-diff-port-171828" in 3m47.787737833s
	I1212 21:09:50.008595   61298 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:50.008607   61298 fix.go:54] fixHost starting: 
	I1212 21:09:50.008999   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:50.009035   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:50.025692   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I1212 21:09:50.026047   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:50.026541   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:09:50.026563   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:50.026945   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:50.027160   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:09:50.027344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:09:50.029005   61298 fix.go:102] recreateIfNeeded on default-k8s-diff-port-171828: state=Stopped err=<nil>
	I1212 21:09:50.029031   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	W1212 21:09:50.029193   61298 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:50.031805   61298 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-171828" ...
	I1212 21:09:48.789770   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790158   60948 main.go:141] libmachine: (old-k8s-version-372099) Found IP for machine: 192.168.39.202
	I1212 21:09:48.790172   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserving static IP address...
	I1212 21:09:48.790195   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790655   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.790683   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserved static IP address: 192.168.39.202
	I1212 21:09:48.790701   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | skip adding static IP to network mk-old-k8s-version-372099 - found existing host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"}
	I1212 21:09:48.790719   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Getting to WaitForSSH function...
	I1212 21:09:48.790736   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting for SSH to be available...
	I1212 21:09:48.793069   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793392   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.793418   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793542   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH client type: external
	I1212 21:09:48.793582   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa (-rw-------)
	I1212 21:09:48.793610   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:48.793620   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | About to run SSH command:
	I1212 21:09:48.793629   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | exit 0
	I1212 21:09:48.883487   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:48.883885   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetConfigRaw
	I1212 21:09:48.884519   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:48.887128   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.887485   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887734   60948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/config.json ...
	I1212 21:09:48.887918   60948 machine.go:88] provisioning docker machine ...
	I1212 21:09:48.887936   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:48.888097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888225   60948 buildroot.go:166] provisioning hostname "old-k8s-version-372099"
	I1212 21:09:48.888238   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888378   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:48.890462   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890820   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.890847   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890982   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:48.891139   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891289   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:48.891597   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:48.891940   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:48.891955   60948 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-372099 && echo "old-k8s-version-372099" | sudo tee /etc/hostname
	I1212 21:09:49.012923   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-372099
	
	I1212 21:09:49.012954   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.015698   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016076   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.016117   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016245   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.016437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016583   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016710   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.016859   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.017308   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.017338   60948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-372099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-372099/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-372099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:49.144804   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:49.144842   60948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:49.144875   60948 buildroot.go:174] setting up certificates
	I1212 21:09:49.144885   60948 provision.go:83] configureAuth start
	I1212 21:09:49.144896   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:49.145181   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:49.147947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148294   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.148340   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148475   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.151218   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.151697   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.151760   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.152022   60948 provision.go:138] copyHostCerts
	I1212 21:09:49.152083   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:49.152102   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:49.152172   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:49.152299   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:49.152307   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:49.152335   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:49.152402   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:49.152407   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:49.152428   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:49.152485   60948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-372099 san=[192.168.39.202 192.168.39.202 localhost 127.0.0.1 minikube old-k8s-version-372099]
	I1212 21:09:49.298406   60948 provision.go:172] copyRemoteCerts
	I1212 21:09:49.298478   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:49.298508   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.301384   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301696   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.301729   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301948   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.302156   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.302320   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.302442   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.385046   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:09:49.409667   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:49.434002   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:09:49.458872   60948 provision.go:86] duration metric: configureAuth took 313.97378ms
	I1212 21:09:49.458907   60948 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:49.459075   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:09:49.459143   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.461794   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.462183   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462373   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.462574   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462730   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462857   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.463042   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.463594   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.463641   60948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:49.767652   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:49.767745   60948 machine.go:91] provisioned docker machine in 879.803204ms
	I1212 21:09:49.767772   60948 start.go:300] post-start starting for "old-k8s-version-372099" (driver="kvm2")
	I1212 21:09:49.767785   60948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:49.767812   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:49.768162   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:49.768191   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.770970   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771351   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.771388   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771595   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.771805   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.772009   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.772155   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.857053   60948 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:49.861510   60948 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:49.861535   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:49.861600   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:49.861672   60948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:49.861781   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:49.869967   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:49.892746   60948 start.go:303] post-start completed in 124.959403ms
	I1212 21:09:49.892768   60948 fix.go:56] fixHost completed within 23.468514721s
	I1212 21:09:49.892790   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.895273   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895618   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.895653   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895776   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.895951   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896269   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.896433   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.896887   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.896904   60948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:50.008384   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415389.953345991
	
	I1212 21:09:50.008407   60948 fix.go:206] guest clock: 1702415389.953345991
	I1212 21:09:50.008415   60948 fix.go:219] Guest: 2023-12-12 21:09:49.953345991 +0000 UTC Remote: 2023-12-12 21:09:49.89277138 +0000 UTC m=+292.853960893 (delta=60.574611ms)
	I1212 21:09:50.008441   60948 fix.go:190] guest clock delta is within tolerance: 60.574611ms
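	The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`, rendered in the log with Go's %!s(MISSING) placeholders) against the host clock and accept the ~60ms delta as within tolerance before reusing the VM. A small Go sketch of that comparison using the two timestamps from the log; the 1-second tolerance and the assumption of a 9-digit fractional part are illustrative, not minikube's actual values.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch converts the "seconds.nanoseconds" output of `date +%s.%N`
	// into a time.Time. It assumes the fractional part, when present, is the
	// full 9-digit nanosecond field.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1702415389.953345991") // guest clock value from the log above
		if err != nil {
			panic(err)
		}
		host := time.Date(2023, time.December, 12, 21, 9, 49, 892771380, time.UTC) // host timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		// Hypothetical 1s tolerance; the log only shows that ~60ms was accepted.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= time.Second)
	}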
	I1212 21:09:50.008445   60948 start.go:83] releasing machines lock for "old-k8s-version-372099", held for 23.584233709s
	I1212 21:09:50.008469   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.008757   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:50.011577   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.011930   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.011958   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.012109   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012750   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012964   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.013059   60948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:50.013102   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.013195   60948 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:50.013222   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.016031   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016304   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016525   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016566   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016720   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.016815   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016855   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016883   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017008   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.017080   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017186   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017256   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.017357   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017520   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.125100   60948 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:50.132264   60948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:50.278965   60948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:50.286230   60948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:50.286308   60948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:50.301165   60948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:50.301192   60948 start.go:475] detecting cgroup driver to use...
	I1212 21:09:50.301256   60948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:50.318715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:50.331943   60948 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:50.332013   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:50.348872   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:50.366970   60948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:50.492936   60948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:50.620103   60948 docker.go:219] disabling docker service ...
	I1212 21:09:50.620185   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:50.632962   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:50.644797   60948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:50.759039   60948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:50.884352   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:50.896549   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:50.919987   60948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 21:09:50.920056   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.932147   60948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:50.932224   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.941195   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.951010   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.962752   60948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:50.975125   60948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:50.984906   60948 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:50.984971   60948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:50.999594   60948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:51.010344   60948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:51.114607   60948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:51.318020   60948 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:51.318108   60948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:51.325048   60948 start.go:543] Will wait 60s for crictl version
	I1212 21:09:51.325134   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:51.329905   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:51.377974   60948 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:51.378075   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.444024   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.512531   60948 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 21:09:51.514171   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:51.517083   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517636   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:51.517667   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517886   60948 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:51.522137   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:51.538124   60948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 21:09:51.538191   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:51.594603   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:51.594688   60948 ssh_runner.go:195] Run: which lz4
	I1212 21:09:51.599732   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:51.604811   60948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:51.604844   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
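The stat failure above is just the "no preload on the guest yet" case, so the 441,050,307-byte (~441 MB) cri-o preload tarball is pushed from the host cache and extracted further down in the log with tar -I lz4. A hedged sketch of that decide-then-copy step, using local files in place of the real stat-over-SSH and scp transfer:

    // preload_copy.go: illustrative only; file names echo the log, but the real
    // flow checks and copies over SSH rather than on the local filesystem.
    package main

    import (
        "io"
        "log"
        "os"
    )

    func ensurePreload(cachePath, guestPath string) error {
        if _, err := os.Stat(guestPath); err == nil {
            return nil // tarball already on the guest, nothing to transfer
        }
        src, err := os.Open(cachePath)
        if err != nil {
            return err
        }
        defer src.Close()
        dst, err := os.Create(guestPath)
        if err != nil {
            return err
        }
        defer dst.Close()
        _, err = io.Copy(dst, src)
        return err
    }

    func main() {
        if err := ensurePreload(
            "preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4",
            "/preloaded.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }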
	I1212 21:09:50.033553   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Start
	I1212 21:09:50.033768   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring networks are active...
	I1212 21:09:50.034638   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network default is active
	I1212 21:09:50.035192   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network mk-default-k8s-diff-port-171828 is active
	I1212 21:09:50.035630   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Getting domain xml...
	I1212 21:09:50.036380   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Creating domain...
	I1212 21:09:51.530274   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting to get IP...
	I1212 21:09:51.531329   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531766   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.531744   62039 retry.go:31] will retry after 271.90604ms: waiting for machine to come up
	I1212 21:09:51.805469   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806028   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806062   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.805967   62039 retry.go:31] will retry after 338.221769ms: waiting for machine to come up
	I1212 21:09:47.488610   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.543731   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.543786   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.543807   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.654915   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.654949   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.989408   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.996278   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:51.996337   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.488734   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.496289   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:52.496327   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.989065   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.997013   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:09:53.012736   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:09:53.012777   60833 api_server.go:131] duration metric: took 6.025395735s to wait for apiserver health ...
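The health-check progression above is the normal restart sequence for a kube-apiserver: 403 while anonymous requests are rejected before RBAC is bootstrapped, then 500 while post-start hooks such as rbac/bootstrap-roles are still running, and finally 200 once every hook reports ok. A minimal sketch of such a polling loop, assuming a bare HTTPS probe with certificate verification disabled and a fixed 500 ms interval (minikube's real loop lives in api_server.go):

    // healthz_poll.go: minimal readiness-probe sketch; endpoint taken from the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a cluster-internal certificate, so this bare
            // probe skips verification rather than loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.50.163:8443/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", code, "- retrying")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }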
	I1212 21:09:53.012789   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:53.012806   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:53.014820   60833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:09:53.016797   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:09:53.047434   60833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:09:53.095811   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:09:53.115354   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:09:53.115441   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:09:53.115465   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:09:53.115504   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:09:53.115532   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:09:53.115551   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:09:53.115582   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:09:53.115607   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:09:53.115633   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:09:53.115643   60833 system_pods.go:74] duration metric: took 19.808922ms to wait for pod list to return data ...
	I1212 21:09:53.115655   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:09:53.127006   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:09:53.127044   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:09:53.127058   60833 node_conditions.go:105] duration metric: took 11.39604ms to run NodePressure ...
	I1212 21:09:53.127079   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:53.597509   60833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603447   60833 kubeadm.go:787] kubelet initialised
	I1212 21:09:53.603476   60833 kubeadm.go:788] duration metric: took 5.932359ms waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603486   60833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:09:53.616570   60833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.623514   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623547   60833 pod_ready.go:81] duration metric: took 6.940441ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.623560   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623570   60833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.631395   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631426   60833 pod_ready.go:81] duration metric: took 7.844548ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.631438   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631453   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.649647   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649681   60833 pod_ready.go:81] duration metric: took 18.215042ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.649693   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649702   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.662239   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662271   60833 pod_ready.go:81] duration metric: took 12.552977ms waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.662285   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662298   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.005841   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005879   60833 pod_ready.go:81] duration metric: took 343.569867ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.005892   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005908   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.403249   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403280   60833 pod_ready.go:81] duration metric: took 397.363687ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.403291   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403297   60833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.802330   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802367   60833 pod_ready.go:81] duration metric: took 399.057426ms waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.802380   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802390   60833 pod_ready.go:38] duration metric: took 1.198894195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
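Each per-pod wait above is cut short for the same reason: the node embed-certs-831188 still reports Ready=False right after the restart, so pod readiness cannot be confirmed yet and the flow relies on the separate node readiness wait that appears further down. An equivalent manual gate, expressed as a hedged Go wrapper around kubectl wait (context and node names come from the log; the 4m timeout mirrors the per-pod budget above):

    // node_ready_wait.go: stand-in for the node-readiness dependency; not
    // minikube's implementation, just the same condition expressed via kubectl.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--context", "embed-certs-831188",
            "wait", "--for=condition=Ready", "node/embed-certs-831188", "--timeout=4m0s")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "node did not become Ready:", err)
            os.Exit(1)
        }
    }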
	I1212 21:09:54.802413   60833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:09:54.822125   60833 ops.go:34] apiserver oom_adj: -16
	I1212 21:09:54.822154   60833 kubeadm.go:640] restartCluster took 21.052529291s
	I1212 21:09:54.822173   60833 kubeadm.go:406] StartCluster complete in 21.101061651s
	I1212 21:09:54.822194   60833 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.822273   60833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:09:54.825185   60833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.825490   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:09:54.825622   60833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:09:54.825714   60833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831188"
	I1212 21:09:54.825735   60833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-831188"
	W1212 21:09:54.825756   60833 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:09:54.825806   60833 addons.go:69] Setting metrics-server=true in profile "embed-certs-831188"
	I1212 21:09:54.825837   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.825849   60833 addons.go:231] Setting addon metrics-server=true in "embed-certs-831188"
	W1212 21:09:54.825863   60833 addons.go:240] addon metrics-server should already be in state true
	I1212 21:09:54.825969   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.826276   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826309   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826522   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826588   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826731   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:54.826767   60833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831188"
	I1212 21:09:54.826847   60833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831188"
	I1212 21:09:54.827349   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.827409   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.834506   60833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-831188" context rescaled to 1 replicas
	I1212 21:09:54.834614   60833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:09:54.837122   60833 out.go:177] * Verifying Kubernetes components...
	I1212 21:09:54.839094   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:09:54.846081   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I1212 21:09:54.846737   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847078   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I1212 21:09:54.847367   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.847387   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.847518   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847775   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848031   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.848053   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.848061   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.848355   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848912   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.848955   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.849635   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1212 21:09:54.849986   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.852255   60833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-831188"
	W1212 21:09:54.852279   60833 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:09:54.852306   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.852727   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.852758   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.853259   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.853289   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.853643   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.854187   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.854223   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.870249   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I1212 21:09:54.870805   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.871406   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.871430   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.871920   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.872090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.873692   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.876011   60833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:54.874681   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1212 21:09:54.877102   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1212 21:09:54.877666   60833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:54.877691   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:09:54.877710   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.877993   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878108   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878602   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878622   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.878738   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878754   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.879004   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.879362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.879426   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.880445   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.880486   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.881642   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.883715   60833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:09:54.885165   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:09:54.885184   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:09:54.885199   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.883021   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.883884   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.885257   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.885295   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.885442   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.885598   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.885727   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.893093   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893096   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.893152   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.893190   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.893534   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.893676   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.902833   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1212 21:09:54.903320   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.903867   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.903888   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.904337   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.904535   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.906183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.906443   60833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:54.906463   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:09:54.906484   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.909330   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.909914   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.909954   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.910136   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.910328   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.910492   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.910639   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:55.020642   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:55.123475   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:55.141398   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:09:55.141429   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:09:55.200799   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:09:55.200833   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:09:55.275142   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:55.275172   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:09:55.308985   60833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831188" to be "Ready" ...
	I1212 21:09:55.309133   60833 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:09:55.341251   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:56.829715   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706199185s)
	I1212 21:09:56.829768   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829780   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.829784   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.809111646s)
	I1212 21:09:56.829860   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829870   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830143   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.830166   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.830178   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.830188   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830267   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.831959   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832013   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.832048   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.831765   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831788   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831794   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.832139   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832236   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.833156   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.833196   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.843517   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.843542   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.843815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.843870   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.843880   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.023745   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.682445607s)
	I1212 21:09:57.023801   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.023815   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024252   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024263   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:57.024276   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024287   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.024303   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024676   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024691   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024706   60833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-831188"
	I1212 21:09:53.564404   60948 crio.go:444] Took 1.964711 seconds to copy over tarball
	I1212 21:09:53.564488   60948 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:57.052627   60948 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.488106402s)
	I1212 21:09:57.052657   60948 crio.go:451] Took 3.488218 seconds to extract the tarball
	I1212 21:09:57.052669   60948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:52.145724   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146453   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146484   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.146352   62039 retry.go:31] will retry after 482.98499ms: waiting for machine to come up
	I1212 21:09:52.630862   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631317   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631343   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.631232   62039 retry.go:31] will retry after 480.323704ms: waiting for machine to come up
	I1212 21:09:53.113661   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.114249   62039 retry.go:31] will retry after 649.543956ms: waiting for machine to come up
	I1212 21:09:53.765102   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765613   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765643   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.765558   62039 retry.go:31] will retry after 824.137815ms: waiting for machine to come up
	I1212 21:09:54.591782   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592356   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592391   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:54.592273   62039 retry.go:31] will retry after 874.563899ms: waiting for machine to come up
	I1212 21:09:55.468934   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469429   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469459   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:55.469393   62039 retry.go:31] will retry after 1.224276076s: waiting for machine to come up
	I1212 21:09:56.695111   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695604   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695637   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:56.695560   62039 retry.go:31] will retry after 1.207984075s: waiting for machine to come up
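The repeated "unable to find current IP address" lines are the normal wait-for-DHCP loop after a libvirt domain is (re)started: the lease table is re-read on each attempt with a growing, jittered delay until the domain's MAC shows up with an address. A rough stand-in for that retry shape (the lookup function and the exact backoff policy here are assumptions, not minikube's retry.go):

    // ip_wait.go: sketch of a jittered retry loop waiting for a machine IP.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for "read the DHCP lease for this MAC".
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("got IP:", ip)
                return
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("retry %d: waiting %v for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            delay += delay / 2 // grow the base delay between attempts
        }
        fmt.Println("gave up waiting for an IP")
    }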
	I1212 21:09:57.157310   60833 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:09:57.322702   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:57.093318   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:57.723104   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:57.723132   60948 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:09:57.723259   60948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.723342   60948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.723442   60948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 21:09:57.723302   60948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724835   60948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 21:09:57.724864   60948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.724861   60948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.724836   60948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724853   60948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.724842   60948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.724847   60948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.724893   60948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.918047   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.920893   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.927072   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.928080   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.931259   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 21:09:57.932017   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.939580   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.990594   60948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 21:09:57.990667   60948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.990724   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.059759   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:58.095401   60948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 21:09:58.095451   60948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.095504   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138192   60948 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 21:09:58.138287   60948 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.138333   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138491   60948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 21:09:58.138532   60948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.138594   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145060   60948 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 21:09:58.145116   60948 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 21:09:58.145146   60948 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 21:09:58.145177   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145185   60948 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.145225   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145073   60948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 21:09:58.145250   60948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.145271   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145322   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:58.268621   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.268721   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.268774   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.268826   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 21:09:58.268863   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.268895   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.268956   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 21:09:58.408748   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 21:09:58.418795   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 21:09:58.418843   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 21:09:58.420451   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 21:09:58.420516   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 21:09:58.420577   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.420585   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 21:09:58.425621   60948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 21:09:58.425639   60948 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.425684   60948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 21:09:59.172682   60948 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 21:09:59.172736   60948 cache_images.go:92] LoadImages completed in 1.449590507s
	W1212 21:09:59.172819   60948 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
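The cache-image step above transfers tarballs from the host-side cache and loads them into CRI-O's image store with podman; here only pause_3.1 is available (the kube-scheduler tarball is missing from the host cache, hence the warning). A minimal sketch of that load step, using the path from the log; the crictl check is an added illustration, not something the log runs:

	sudo podman load -i /var/lib/minikube/images/pause_3.1
	sudo crictl images | grep pause        # confirm the CRI runtime now sees the image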
	I1212 21:09:59.172900   60948 ssh_runner.go:195] Run: crio config
	I1212 21:09:59.238502   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:09:59.238522   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:59.238539   60948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:59.238560   60948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-372099 NodeName:old-k8s-version-372099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 21:09:59.238733   60948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-372099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-372099
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.202:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
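The three YAML documents above (InitConfiguration, ClusterConfiguration and KubeletConfiguration plus the KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and, once the restart path below decides the cluster needs reconfiguring, applied phase by phase against the pinned v1.16.0 binaries. Condensed from the commands that appear further down in this log:

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml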
	I1212 21:09:59.238886   60948 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-372099 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:59.238953   60948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 21:09:59.249183   60948 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:59.249271   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:59.263171   60948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 21:09:59.281172   60948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:59.302622   60948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 21:09:59.323131   60948 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:59.327344   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
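The grep/rewrite pair above first checks whether /etc/hosts already maps control-plane.minikube.internal, then rewrites the file: any stale entry is stripped and the current mapping appended via a temp file before being copied back with sudo. The same one-liner, unrolled for readability (same effect, addresses as in the log):

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.39.202\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts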
	I1212 21:09:59.342182   60948 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099 for IP: 192.168.39.202
	I1212 21:09:59.342216   60948 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:59.342412   60948 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:59.342465   60948 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:59.342554   60948 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/client.key
	I1212 21:09:59.342659   60948 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key.9e66e972
	I1212 21:09:59.342723   60948 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key
	I1212 21:09:59.342854   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:59.342891   60948 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:59.342908   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:59.342947   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:59.342984   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:59.343024   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:59.343081   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:59.343948   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:59.375250   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:59.404892   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:59.434762   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:09:59.465696   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:59.496528   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:59.521739   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:59.545606   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:59.574153   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:59.599089   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:59.625217   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:59.654715   60948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:59.674946   60948 ssh_runner.go:195] Run: openssl version
	I1212 21:09:59.683295   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:59.697159   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702671   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702745   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.710931   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:59.723204   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:59.735713   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741621   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741715   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.748041   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:59.760217   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:59.772701   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778501   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778589   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.787066   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
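The certificate steps above follow the standard OpenSSL hashed-directory convention: each PEM placed under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can locate it by hash. A condensed sketch of that convention for one of the certs above (the log goes through an intermediate /etc/ssl/certs/<name>.pem symlink first; the hash value is whatever openssl prints, b5213941 in this run):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941, as in the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"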
	I1212 21:09:59.803355   60948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:59.809920   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:59.819093   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:59.827918   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:59.836228   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:59.845437   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:59.852647   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
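The "-checkend 86400" runs above are expiry checks: openssl exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a non-zero status would trigger certificate regeneration. A small sketch of how such a check is typically consumed, using one of the paths from the log:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		echo "certificate valid for at least another 24h"
	else
		echo "certificate expires within 24h (or is unreadable)"
	fi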
	I1212 21:09:59.861170   60948 kubeadm.go:404] StartCluster: {Name:old-k8s-version-372099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:59.861285   60948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:59.861358   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:59.906807   60948 cri.go:89] found id: ""
	I1212 21:09:59.906885   60948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:59.919539   60948 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:59.919579   60948 kubeadm.go:636] restartCluster start
	I1212 21:09:59.919637   60948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:59.930547   60948 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.931845   60948 kubeconfig.go:92] found "old-k8s-version-372099" server: "https://192.168.39.202:8443"
	I1212 21:09:59.934471   60948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:59.945701   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.945780   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.959415   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.959438   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.959496   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.975677   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.476388   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.476469   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.493781   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.976367   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.976475   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.993084   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.476277   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.476362   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.490076   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.976393   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.976505   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.990771   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
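Each "Checking apiserver status" attempt above shells out to pgrep: -f matches against the full command line, -x requires the (regex) pattern to match that command line exactly, and -n returns only the newest matching PID. While the control plane is down the command exits 1, which produces the repeated "stopped: unable to get apiserver pid" warnings here and below. A standalone equivalent of the probe (pattern quoted for the shell):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"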
	I1212 21:09:57.905327   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905730   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:57.905649   62039 retry.go:31] will retry after 1.427858275s: waiting for machine to come up
	I1212 21:09:59.335284   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335735   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:59.335630   62039 retry.go:31] will retry after 1.773169552s: waiting for machine to come up
	I1212 21:10:01.110044   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110533   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:01.110468   62039 retry.go:31] will retry after 2.199207847s: waiting for machine to come up
	I1212 21:09:57.672094   60833 addons.go:502] enable addons completed in 2.846462968s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:09:59.822907   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:01.824673   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:02.325980   60833 node_ready.go:49] node "embed-certs-831188" has status "Ready":"True"
	I1212 21:10:02.326008   60833 node_ready.go:38] duration metric: took 7.016985612s waiting for node "embed-certs-831188" to be "Ready" ...
	I1212 21:10:02.326021   60833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:02.339547   60833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345609   60833 pod_ready.go:92] pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.345638   60833 pod_ready.go:81] duration metric: took 6.052243ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345652   60833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.476354   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.476429   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.489326   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:02.975846   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.975935   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.992975   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.476463   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.476577   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.489471   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.975762   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.975891   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.992773   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.476395   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.476510   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.489163   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.976403   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.976503   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.990508   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.475988   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.476108   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.489347   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.975811   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.975874   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.988996   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.475817   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.475896   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.487886   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.976376   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.976445   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.988627   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.312460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312859   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312892   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:03.312807   62039 retry.go:31] will retry after 4.329332977s: waiting for machine to come up
	I1212 21:10:02.864894   60833 pod_ready.go:92] pod "etcd-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.864921   60833 pod_ready.go:81] duration metric: took 519.26143ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.864935   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871360   60833 pod_ready.go:92] pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.871392   60833 pod_ready.go:81] duration metric: took 6.449389ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871406   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529203   60833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.529228   60833 pod_ready.go:81] duration metric: took 1.657813273s waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529243   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722607   60833 pod_ready.go:92] pod "kube-proxy-nsv4w" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.722631   60833 pod_ready.go:81] duration metric: took 193.381057ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722641   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124360   60833 pod_ready.go:92] pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:05.124388   60833 pod_ready.go:81] duration metric: took 401.739767ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124401   60833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:07.476521   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.476603   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.487362   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:07.976016   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.976101   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.987221   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.475793   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.475894   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.486641   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.976140   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.976262   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.987507   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.476080   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:09.476168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:09.487537   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.946342   60948 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:09.946377   60948 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:09.946412   60948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:09.946487   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:09.988850   60948 cri.go:89] found id: ""
	I1212 21:10:09.988939   60948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:10.004726   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:10.015722   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:10.015787   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025706   60948 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025743   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:10.156614   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.030056   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.219060   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.315587   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.398016   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:11.398110   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.411642   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.927297   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:07.644473   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644921   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644950   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:07.644868   62039 retry.go:31] will retry after 5.180616294s: waiting for machine to come up
	I1212 21:10:07.428366   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:09.929940   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.157275   60628 start.go:369] acquired machines lock for "no-preload-343495" in 1m3.684137096s
	I1212 21:10:14.157330   60628 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:10:14.157342   60628 fix.go:54] fixHost starting: 
	I1212 21:10:14.157767   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:14.157812   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:14.175936   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 21:10:14.176421   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:14.176957   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:10:14.176982   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:14.177380   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:14.177601   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:14.177804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:10:14.179672   60628 fix.go:102] recreateIfNeeded on no-preload-343495: state=Stopped err=<nil>
	I1212 21:10:14.179696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	W1212 21:10:14.179911   60628 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:10:14.183064   60628 out.go:177] * Restarting existing kvm2 VM for "no-preload-343495" ...
	I1212 21:10:12.828825   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.829471   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Found IP for machine: 192.168.72.253
	I1212 21:10:12.829501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserving static IP address...
	I1212 21:10:12.829530   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has current primary IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.830061   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.830110   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | skip adding static IP to network mk-default-k8s-diff-port-171828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"}
	I1212 21:10:12.830133   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserved static IP address: 192.168.72.253
	I1212 21:10:12.830152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Getting to WaitForSSH function...
	I1212 21:10:12.830163   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for SSH to be available...
	I1212 21:10:12.832654   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833033   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.833065   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833273   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH client type: external
	I1212 21:10:12.833302   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa (-rw-------)
	I1212 21:10:12.833335   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:12.833352   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | About to run SSH command:
	I1212 21:10:12.833370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | exit 0
	I1212 21:10:12.931871   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:12.932439   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetConfigRaw
	I1212 21:10:12.933250   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:12.936555   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937009   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.937051   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937341   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:10:12.937642   61298 machine.go:88] provisioning docker machine ...
	I1212 21:10:12.937669   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:12.937933   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938136   61298 buildroot.go:166] provisioning hostname "default-k8s-diff-port-171828"
	I1212 21:10:12.938161   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938373   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:12.941209   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941589   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.941620   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941796   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:12.941978   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942183   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:12.942539   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:12.942885   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:12.942904   61298 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-171828 && echo "default-k8s-diff-port-171828" | sudo tee /etc/hostname
	I1212 21:10:13.099123   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-171828
	
	I1212 21:10:13.099152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.102085   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.102496   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102756   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.102965   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103166   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.103580   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.104000   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.104034   61298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-171828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-171828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-171828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:13.246501   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:13.246535   61298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:13.246561   61298 buildroot.go:174] setting up certificates
	I1212 21:10:13.246577   61298 provision.go:83] configureAuth start
	I1212 21:10:13.246590   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:13.246875   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:13.249703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250010   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.250043   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250196   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.252501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.252814   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.252852   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.253086   61298 provision.go:138] copyHostCerts
	I1212 21:10:13.253151   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:13.253171   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:13.253266   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:13.253399   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:13.253412   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:13.253437   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:13.253501   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:13.253508   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:13.253526   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:13.253586   61298 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-171828 san=[192.168.72.253 192.168.72.253 localhost 127.0.0.1 minikube default-k8s-diff-port-171828]
	I1212 21:10:13.331755   61298 provision.go:172] copyRemoteCerts
	I1212 21:10:13.331819   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:13.331841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.334412   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334741   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.334777   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334981   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.335185   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.335369   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.335498   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.429448   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:10:13.454350   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:13.479200   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 21:10:13.505120   61298 provision.go:86] duration metric: configureAuth took 258.53005ms
	I1212 21:10:13.505151   61298 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:13.505370   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:13.505451   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.508400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.508826   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.508858   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.509144   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.509360   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509524   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.509829   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.510161   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.510184   61298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:13.874783   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:13.874810   61298 machine.go:91] provisioned docker machine in 937.151566ms
	I1212 21:10:13.874822   61298 start.go:300] post-start starting for "default-k8s-diff-port-171828" (driver="kvm2")
	I1212 21:10:13.874835   61298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:13.874853   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:13.875182   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:13.875213   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.877937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.878400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878640   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.878819   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.878984   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.879148   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.978276   61298 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:13.984077   61298 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:13.984114   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:13.984229   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:13.984309   61298 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:13.984391   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:13.996801   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:14.021773   61298 start.go:303] post-start completed in 146.935628ms
	I1212 21:10:14.021796   61298 fix.go:56] fixHost completed within 24.013191129s
	I1212 21:10:14.021815   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.024847   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.025227   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.025599   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025788   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025951   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.026106   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:14.026436   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:14.026452   61298 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 21:10:14.157053   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415414.138141396
	
	I1212 21:10:14.157082   61298 fix.go:206] guest clock: 1702415414.138141396
	I1212 21:10:14.157092   61298 fix.go:219] Guest: 2023-12-12 21:10:14.138141396 +0000 UTC Remote: 2023-12-12 21:10:14.021800288 +0000 UTC m=+251.962428882 (delta=116.341108ms)
	I1212 21:10:14.157130   61298 fix.go:190] guest clock delta is within tolerance: 116.341108ms
	I1212 21:10:14.157141   61298 start.go:83] releasing machines lock for "default-k8s-diff-port-171828", held for 24.148576854s
	I1212 21:10:14.157193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.157567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:14.160748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161134   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.161172   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161489   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162089   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162259   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162333   61298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:14.162389   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.162627   61298 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:14.162652   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.165726   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.165941   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166485   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166548   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166598   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166636   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166649   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166905   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166907   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167104   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167153   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167231   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.167349   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167500   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.294350   61298 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:14.301705   61298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:14.459967   61298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:14.467979   61298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:14.468043   61298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:14.483883   61298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:14.483910   61298 start.go:475] detecting cgroup driver to use...
	I1212 21:10:14.483976   61298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:14.498105   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:14.511716   61298 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:14.511784   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:14.525795   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:14.539213   61298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:14.658453   61298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:14.786222   61298 docker.go:219] disabling docker service ...
	I1212 21:10:14.786296   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:14.801656   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:14.814821   61298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:14.950542   61298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:15.085306   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:15.098508   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:15.118634   61298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:15.118709   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.130579   61298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:15.130667   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.140672   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.150340   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.161966   61298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:15.173049   61298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:15.181620   61298 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:15.181703   61298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:15.195505   61298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:15.204076   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:15.327587   61298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:15.505003   61298 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:15.505078   61298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:15.512282   61298 start.go:543] Will wait 60s for crictl version
	I1212 21:10:15.512349   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:10:15.516564   61298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:15.556821   61298 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:15.556906   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.612743   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.665980   61298 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 21:10:12.426883   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.927168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.962834   60948 api_server.go:72] duration metric: took 1.56481721s to wait for apiserver process to appear ...
	I1212 21:10:12.962862   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:12.962890   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.963447   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:12.963489   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.964022   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:13.464393   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:15.667323   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:15.670368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.670769   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:15.670804   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.671037   61298 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:15.675575   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:15.688523   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:10:15.688602   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:15.739601   61298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:10:15.739718   61298 ssh_runner.go:195] Run: which lz4
	I1212 21:10:15.744272   61298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:10:15.749574   61298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:10:15.749612   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:10:12.428614   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.430542   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:16.442797   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.184429   60628 main.go:141] libmachine: (no-preload-343495) Calling .Start
	I1212 21:10:14.184692   60628 main.go:141] libmachine: (no-preload-343495) Ensuring networks are active...
	I1212 21:10:14.186580   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network default is active
	I1212 21:10:14.187398   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network mk-no-preload-343495 is active
	I1212 21:10:14.188587   60628 main.go:141] libmachine: (no-preload-343495) Getting domain xml...
	I1212 21:10:14.189457   60628 main.go:141] libmachine: (no-preload-343495) Creating domain...
	I1212 21:10:15.509306   60628 main.go:141] libmachine: (no-preload-343495) Waiting to get IP...
	I1212 21:10:15.510320   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.510728   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.510772   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.510702   62255 retry.go:31] will retry after 275.567053ms: waiting for machine to come up
	I1212 21:10:15.788793   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.789233   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.789262   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.789193   62255 retry.go:31] will retry after 341.343409ms: waiting for machine to come up
	I1212 21:10:16.131936   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.132427   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.132452   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.132377   62255 retry.go:31] will retry after 302.905542ms: waiting for machine to come up
	I1212 21:10:16.437184   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.437944   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.437968   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.437850   62255 retry.go:31] will retry after 407.178114ms: waiting for machine to come up
	I1212 21:10:16.846738   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.847393   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.847429   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.847349   62255 retry.go:31] will retry after 507.703222ms: waiting for machine to come up
	I1212 21:10:17.357373   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:17.357975   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:17.358005   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:17.357907   62255 retry.go:31] will retry after 920.403188ms: waiting for machine to come up
	I1212 21:10:18.464726   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 21:10:18.464781   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.736922   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.736969   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.736990   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.816132   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.816165   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.964508   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.012996   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.013048   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.464538   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.509558   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.509601   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.965183   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:21.369579   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:10:21.381334   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:10:21.381365   60948 api_server.go:131] duration metric: took 8.418495294s to wait for apiserver health ...
	I1212 21:10:21.381378   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.381385   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.501371   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:21.801933   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:21.827010   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:21.853900   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:17.641827   61298 crio.go:444] Took 1.897583 seconds to copy over tarball
	I1212 21:10:17.641919   61298 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:10:21.283045   61298 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.641094924s)
	I1212 21:10:21.283076   61298 crio.go:451] Took 3.641222 seconds to extract the tarball
	I1212 21:10:21.283088   61298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:10:21.328123   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:21.387894   61298 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:10:21.387923   61298 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:10:21.387996   61298 ssh_runner.go:195] Run: crio config
	I1212 21:10:21.467191   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.467216   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.467255   61298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:21.467278   61298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-171828 NodeName:default-k8s-diff-port-171828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:21.467443   61298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-171828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:21.467537   61298 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-171828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 21:10:21.467596   61298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:10:21.478940   61298 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:21.479024   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:21.492604   61298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 21:10:21.514260   61298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:10:21.535059   61298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 21:10:21.557074   61298 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:21.562765   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:21.578989   61298 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828 for IP: 192.168.72.253
	I1212 21:10:21.579047   61298 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:21.579282   61298 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:21.579383   61298 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:21.579495   61298 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/client.key
	I1212 21:10:21.768212   61298 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key.a1600f99
	I1212 21:10:21.768305   61298 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key
	I1212 21:10:21.768447   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:21.768489   61298 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:21.768504   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:21.768542   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:21.768596   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:21.768625   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:21.768680   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:21.769557   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:21.800794   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:10:21.833001   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:21.864028   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:21.893107   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:21.918580   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:21.944095   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:21.970251   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:21.998947   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:22.027620   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:22.056851   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:22.084321   61298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:22.103273   61298 ssh_runner.go:195] Run: openssl version
	I1212 21:10:22.109518   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:18.932477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:21.431431   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:18.280164   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:18.280656   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:18.280687   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:18.280612   62255 retry.go:31] will retry after 761.825655ms: waiting for machine to come up
	I1212 21:10:19.043686   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:19.044170   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:19.044203   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:19.044117   62255 retry.go:31] will retry after 1.173408436s: waiting for machine to come up
	I1212 21:10:20.218938   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:20.219457   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:20.219488   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:20.219412   62255 retry.go:31] will retry after 1.484817124s: waiting for machine to come up
	I1212 21:10:21.706027   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:21.706505   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:21.706536   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:21.706467   62255 retry.go:31] will retry after 2.260831172s: waiting for machine to come up
	I1212 21:10:22.159195   60948 system_pods.go:59] 7 kube-system pods found
	I1212 21:10:22.284903   60948 system_pods.go:61] "coredns-5644d7b6d9-slvnx" [0db32241-69df-48dc-a60f-6921f9c5746f] Running
	I1212 21:10:22.284916   60948 system_pods.go:61] "etcd-old-k8s-version-372099" [72d219cb-b393-423d-ba62-b880bd2d26a0] Running
	I1212 21:10:22.284924   60948 system_pods.go:61] "kube-apiserver-old-k8s-version-372099" [c4f09d2d-07d2-4403-886b-37cb1471e7e5] Running
	I1212 21:10:22.284932   60948 system_pods.go:61] "kube-controller-manager-old-k8s-version-372099" [4a17c60c-2c72-4296-a7e4-0ae05e7bfa39] Running
	I1212 21:10:22.284939   60948 system_pods.go:61] "kube-proxy-5mvzb" [ec7c6540-35e2-4ae4-8592-d797132a8328] Running
	I1212 21:10:22.284945   60948 system_pods.go:61] "kube-scheduler-old-k8s-version-372099" [472284a4-9340-4bbc-8a1f-b9b55f4b0c3c] Running
	I1212 21:10:22.284952   60948 system_pods.go:61] "storage-provisioner" [b9fcec5f-bd1f-4c47-95cd-a9c8e3011e50] Running
	I1212 21:10:22.284961   60948 system_pods.go:74] duration metric: took 431.035724ms to wait for pod list to return data ...
	I1212 21:10:22.284990   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:22.592700   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:22.592734   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:22.592748   60948 node_conditions.go:105] duration metric: took 307.751463ms to run NodePressure ...
	I1212 21:10:22.592770   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:23.483331   60948 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:23.500661   60948 retry.go:31] will retry after 162.846257ms: kubelet not initialised
	I1212 21:10:23.669569   60948 retry.go:31] will retry after 257.344573ms: kubelet not initialised
	I1212 21:10:23.942373   60948 retry.go:31] will retry after 538.191385ms: kubelet not initialised
	I1212 21:10:24.487436   60948 retry.go:31] will retry after 635.824669ms: kubelet not initialised
	I1212 21:10:25.129226   60948 retry.go:31] will retry after 946.117517ms: kubelet not initialised
	I1212 21:10:26.082106   60948 retry.go:31] will retry after 2.374588936s: kubelet not initialised
	I1212 21:10:22.121093   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291519   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291585   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.297989   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:22.309847   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:22.321817   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326715   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326766   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.333001   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:22.345044   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:22.357827   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362795   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362858   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.368864   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:22.380605   61298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:22.385986   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:22.392931   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:22.399683   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:22.407203   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:22.414730   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:22.421808   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:10:22.430050   61298 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:22.430205   61298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:22.430263   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:22.482907   61298 cri.go:89] found id: ""
	I1212 21:10:22.482981   61298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:22.495001   61298 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:22.495032   61298 kubeadm.go:636] restartCluster start
	I1212 21:10:22.495104   61298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:22.506418   61298 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.508078   61298 kubeconfig.go:92] found "default-k8s-diff-port-171828" server: "https://192.168.72.253:8444"
	I1212 21:10:22.511809   61298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:22.523641   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.523703   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.536887   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.536913   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.536965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.549418   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.050111   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.050218   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.063845   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.550201   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.550303   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.567468   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.050021   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.050193   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.064792   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.550119   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.550213   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.568169   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.049891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.049997   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.063341   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.549592   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.549682   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.564096   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.049596   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.049701   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.063482   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.549680   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.549793   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.563956   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.049482   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.049614   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.062881   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.440487   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:25.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:23.969715   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:23.970242   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:23.970272   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:23.970200   62255 retry.go:31] will retry after 1.769886418s: waiting for machine to come up
	I1212 21:10:25.741628   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:25.742060   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:25.742098   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:25.742014   62255 retry.go:31] will retry after 2.283589137s: waiting for machine to come up
	I1212 21:10:28.462838   60948 retry.go:31] will retry after 1.809333362s: kubelet not initialised
	I1212 21:10:30.278747   60948 retry.go:31] will retry after 4.059791455s: kubelet not initialised
	I1212 21:10:27.550084   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.550176   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.564365   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.049688   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.049771   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.065367   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.549922   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.550009   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.566964   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.049535   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.049643   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.062264   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.549891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.549970   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.563687   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.050397   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.050492   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.065602   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.550210   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.550298   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.562793   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.050281   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.050374   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.064836   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.550407   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.550527   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.563474   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:32.049593   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:32.049689   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:32.062459   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.935166   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:30.429274   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:28.028345   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:28.028796   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:28.028824   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:28.028757   62255 retry.go:31] will retry after 4.021160394s: waiting for machine to come up
	I1212 21:10:32.052992   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:32.053479   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:32.053506   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:32.053442   62255 retry.go:31] will retry after 4.864494505s: waiting for machine to come up
	I1212 21:10:34.344571   60948 retry.go:31] will retry after 9.338953291s: kubelet not initialised
	I1212 21:10:32.524460   61298 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:32.524492   61298 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:32.524523   61298 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:32.524586   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:32.565596   61298 cri.go:89] found id: ""
	I1212 21:10:32.565685   61298 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:32.582458   61298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:32.592539   61298 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:32.592615   61298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603658   61298 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603683   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:32.730418   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.535390   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.742601   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.839081   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.909128   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:33.909209   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:33.928197   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.452146   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.952473   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.452270   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.952431   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.451626   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.482100   61298 api_server.go:72] duration metric: took 2.572973799s to wait for apiserver process to appear ...
	I1212 21:10:36.482125   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:36.482154   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.482833   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.482869   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.483345   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.984105   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:32.433032   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:34.928686   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.930503   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.920697   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921201   60628 main.go:141] libmachine: (no-preload-343495) Found IP for machine: 192.168.61.176
	I1212 21:10:36.921235   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has current primary IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921248   60628 main.go:141] libmachine: (no-preload-343495) Reserving static IP address...
	I1212 21:10:36.921719   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.921757   60628 main.go:141] libmachine: (no-preload-343495) DBG | skip adding static IP to network mk-no-preload-343495 - found existing host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"}
	I1212 21:10:36.921770   60628 main.go:141] libmachine: (no-preload-343495) Reserved static IP address: 192.168.61.176
	I1212 21:10:36.921785   60628 main.go:141] libmachine: (no-preload-343495) Waiting for SSH to be available...
	I1212 21:10:36.921802   60628 main.go:141] libmachine: (no-preload-343495) DBG | Getting to WaitForSSH function...
	I1212 21:10:36.924581   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.924908   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.924941   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.925154   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH client type: external
	I1212 21:10:36.925191   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa (-rw-------)
	I1212 21:10:36.925223   60628 main.go:141] libmachine: (no-preload-343495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:36.925234   60628 main.go:141] libmachine: (no-preload-343495) DBG | About to run SSH command:
	I1212 21:10:36.925246   60628 main.go:141] libmachine: (no-preload-343495) DBG | exit 0
	I1212 21:10:37.059619   60628 main.go:141] libmachine: (no-preload-343495) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:37.060017   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetConfigRaw
	I1212 21:10:37.060752   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.063599   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064325   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.064365   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064468   60628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/config.json ...
	I1212 21:10:37.064705   60628 machine.go:88] provisioning docker machine ...
	I1212 21:10:37.064733   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:37.064938   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065115   60628 buildroot.go:166] provisioning hostname "no-preload-343495"
	I1212 21:10:37.065144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065286   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.068118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068517   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.068548   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.068980   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069141   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069312   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.069507   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.069958   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.069985   60628 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-343495 && echo "no-preload-343495" | sudo tee /etc/hostname
	I1212 21:10:37.212905   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-343495
	
	I1212 21:10:37.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.215789   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216147   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.216182   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.216525   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216704   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216877   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.217037   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.217425   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.217444   60628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-343495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-343495/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-343495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:37.355687   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:37.355721   60628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:37.355754   60628 buildroot.go:174] setting up certificates
	I1212 21:10:37.355767   60628 provision.go:83] configureAuth start
	I1212 21:10:37.355780   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.356089   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.359197   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359644   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.359717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359937   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.362695   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363043   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.363079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363251   60628 provision.go:138] copyHostCerts
	I1212 21:10:37.363316   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:37.363336   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:37.363410   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:37.363536   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:37.363549   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:37.363585   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:37.363671   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:37.363677   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:37.363703   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:37.363757   60628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.no-preload-343495 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-343495]
	I1212 21:10:37.526121   60628 provision.go:172] copyRemoteCerts
	I1212 21:10:37.526205   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:37.526234   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.529079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529425   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.529492   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529659   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.529850   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.530009   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.530153   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:37.632384   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:37.661242   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:10:37.689215   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:10:37.714781   60628 provision.go:86] duration metric: configureAuth took 358.999712ms
	I1212 21:10:37.714819   60628 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:37.715040   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:10:37.715144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.718379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.718815   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.718844   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.719212   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.719422   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719625   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719789   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.719975   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.720484   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.720519   60628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:38.062630   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:38.062660   60628 machine.go:91] provisioned docker machine in 997.934774ms
	I1212 21:10:38.062673   60628 start.go:300] post-start starting for "no-preload-343495" (driver="kvm2")
	I1212 21:10:38.062687   60628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:38.062707   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.062999   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:38.063033   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.065898   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066299   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.066331   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066626   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.066878   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.067063   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.067228   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.164612   60628 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:38.170132   60628 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:38.170162   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:38.170244   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:38.170351   60628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:38.170467   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:38.181959   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:38.208734   60628 start.go:303] post-start completed in 146.045424ms
	I1212 21:10:38.208762   60628 fix.go:56] fixHost completed within 24.051421131s
	I1212 21:10:38.208782   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.212118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212519   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.212551   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212732   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213124   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213268   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.213436   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:38.213801   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:38.213827   60628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:38.337185   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415438.279018484
	
	I1212 21:10:38.337225   60628 fix.go:206] guest clock: 1702415438.279018484
	I1212 21:10:38.337239   60628 fix.go:219] Guest: 2023-12-12 21:10:38.279018484 +0000 UTC Remote: 2023-12-12 21:10:38.208766005 +0000 UTC m=+370.324656490 (delta=70.252479ms)
	I1212 21:10:38.337264   60628 fix.go:190] guest clock delta is within tolerance: 70.252479ms
	I1212 21:10:38.337275   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 24.179969571s
	I1212 21:10:38.337305   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.337527   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:38.340658   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341019   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.341053   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341233   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.341952   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342179   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342291   60628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:38.342336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.342388   60628 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:38.342413   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.345379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345419   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345762   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345809   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345841   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345864   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.346049   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346055   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346433   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346438   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346597   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.346596   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.467200   60628 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:38.475578   60628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:38.627838   60628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:38.634520   60628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:38.634614   60628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:38.654823   60628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:38.654847   60628 start.go:475] detecting cgroup driver to use...
	I1212 21:10:38.654928   60628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:38.673550   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:38.691252   60628 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:38.691318   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:38.707542   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:38.724686   60628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:38.843033   60628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:38.973535   60628 docker.go:219] disabling docker service ...
	I1212 21:10:38.973610   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:38.987940   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:39.001346   60628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:39.105401   60628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:39.209198   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:39.222268   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:39.243154   60628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:39.243226   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.253418   60628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:39.253497   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.263273   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.274546   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.284359   60628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:39.294828   60628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:39.304818   60628 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:39.304894   60628 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:39.318541   60628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:39.328819   60628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:39.439285   60628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:39.619385   60628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:39.619462   60628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:39.625279   60628 start.go:543] Will wait 60s for crictl version
	I1212 21:10:39.625358   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:39.630234   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:39.680505   60628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:39.680579   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.736272   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.796111   60628 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 21:10:39.732208   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.732243   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.732258   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.761735   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.761771   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.984129   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.990620   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:39.990650   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.484444   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.492006   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:40.492039   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.983459   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.990813   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:10:41.001024   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:10:41.001055   61298 api_server.go:131] duration metric: took 4.518922579s to wait for apiserver health ...
	I1212 21:10:41.001070   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:41.001078   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:41.003043   61298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:41.004669   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:41.084775   61298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:41.173688   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:41.201100   61298 system_pods.go:59] 9 kube-system pods found
	I1212 21:10:41.201132   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201140   61298 system_pods.go:61] "coredns-5dd5756b68-hc52p" [f8895d1e-3484-4ffe-9d11-f5e4b7617c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201148   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:10:41.201158   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:10:41.201165   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:10:41.201171   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:10:41.201177   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:10:41.201182   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:10:41.201187   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:10:41.201193   61298 system_pods.go:74] duration metric: took 27.476871ms to wait for pod list to return data ...
	I1212 21:10:41.201203   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:41.205597   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:41.205624   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:41.205638   61298 node_conditions.go:105] duration metric: took 4.431218ms to run NodePressure ...
	I1212 21:10:41.205653   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:41.516976   61298 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529555   61298 kubeadm.go:787] kubelet initialised
	I1212 21:10:41.529592   61298 kubeadm.go:788] duration metric: took 12.533051ms waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529601   61298 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:41.538991   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.546618   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546645   61298 pod_ready.go:81] duration metric: took 7.620954ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.546658   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546667   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.556921   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556951   61298 pod_ready.go:81] duration metric: took 10.273719ms waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.556963   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556972   61298 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.563538   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563570   61298 pod_ready.go:81] duration metric: took 6.584443ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.563586   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563598   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.578973   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579009   61298 pod_ready.go:81] duration metric: took 15.402148ms waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.579025   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579046   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.978938   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978972   61298 pod_ready.go:81] duration metric: took 399.914995ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.978990   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978999   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:38.930743   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:41.429587   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:39.798106   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:39.800962   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801364   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:39.801399   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801592   60628 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:39.806328   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:39.821949   60628 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:10:39.822014   60628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:39.873704   60628 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:10:39.873733   60628 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:10:39.873820   60628 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.873840   60628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.873859   60628 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:39.874021   60628 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.874062   60628 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 21:10:39.874043   60628 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.873836   60628 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.874359   60628 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.875369   60628 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875379   60628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.875390   60628 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 21:10:39.875428   60628 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.875284   60628 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.875803   60628 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.060906   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.061267   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.063065   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.074673   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 21:10:40.076082   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.080787   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.108962   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.169237   60628 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 21:10:40.169289   60628 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.169363   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.172419   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.251588   60628 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 21:10:40.251638   60628 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.251684   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.264051   60628 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 21:10:40.264146   60628 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.264227   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397546   60628 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 21:10:40.397590   60628 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.397640   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397669   60628 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 21:10:40.397709   60628 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.397774   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397876   60628 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 21:10:40.397978   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.398033   60628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:10:40.398064   60628 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.398079   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.398105   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397976   60628 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.398142   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.398143   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.418430   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.418500   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.530581   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530693   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.530781   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530584   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 21:10:40.530918   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:40.544770   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.544970   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 21:10:40.545108   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:40.567016   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567130   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567196   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.567297   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.604461   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 21:10:40.604484   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.604531   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 21:10:40.604488   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604644   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604590   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.612665   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 21:10:40.612741   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612794   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612800   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 21:10:40.612935   60628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:40.615786   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 21:10:42.378453   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378486   61298 pod_ready.go:81] duration metric: took 399.478547ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.378499   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378508   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:42.778834   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778871   61298 pod_ready.go:81] duration metric: took 400.345358ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.778887   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778897   61298 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:43.179851   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179879   61298 pod_ready.go:81] duration metric: took 400.97377ms waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:43.179891   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179898   61298 pod_ready.go:38] duration metric: took 1.6502873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:43.179913   61298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:10:43.196087   61298 ops.go:34] apiserver oom_adj: -16
	I1212 21:10:43.196114   61298 kubeadm.go:640] restartCluster took 20.701074763s
	I1212 21:10:43.196126   61298 kubeadm.go:406] StartCluster complete in 20.766085453s
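A rough manual equivalent of the system-critical pod wait shown above, assuming kubectl is pointed at the same cluster and using the labels listed in the log (the timeout value is illustrative, not taken from the test run):

  # Wait for the core control-plane and DNS pods to report Ready, mirroring pod_ready.go's label list.
  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
  kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=4m
  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m
  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m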
	I1212 21:10:43.196146   61298 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.196225   61298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:10:43.198844   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.199122   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:10:43.199268   61298 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:10:43.199342   61298 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199363   61298 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199372   61298 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:10:43.199396   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:43.199456   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199373   61298 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199492   61298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-171828"
	I1212 21:10:43.199389   61298 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199551   61298 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199568   61298 addons.go:240] addon metrics-server should already be in state true
	I1212 21:10:43.199637   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199891   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199915   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.199922   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199945   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.200148   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.200177   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.218067   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I1212 21:10:43.218679   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1212 21:10:43.218817   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219111   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219234   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1212 21:10:43.219356   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219372   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219590   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219607   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219699   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219807   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220061   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220258   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.220278   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.220324   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.220436   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.220488   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.220676   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.221418   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.221444   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.224718   61298 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.224742   61298 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:10:43.224769   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.225189   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.225227   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.225431   61298 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-171828" context rescaled to 1 replicas
	I1212 21:10:43.225467   61298 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:10:43.228523   61298 out.go:177] * Verifying Kubernetes components...
	I1212 21:10:43.230002   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:10:43.239165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1212 21:10:43.239749   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.240357   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.240383   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.240761   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.240937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.241446   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1212 21:10:43.241951   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.242522   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.242541   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.242864   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.242931   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.244753   61298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:43.243219   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.246309   61298 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.246332   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:10:43.246358   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.248809   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.250840   61298 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:10:43.252430   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:10:43.251041   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.250309   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.247068   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I1212 21:10:43.252596   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:10:43.252622   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.252718   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.252745   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.253368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.253677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.253846   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.254434   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.259686   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.259697   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.259748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259844   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.259883   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.259973   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.260149   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.260361   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.260420   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.261546   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.261594   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.284357   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I1212 21:10:43.284945   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.285431   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.285444   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.286009   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.286222   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.288257   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.288542   61298 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.288565   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:10:43.288586   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.291842   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.292527   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.292680   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.293076   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.293350   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.293512   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.293683   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.405154   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.426115   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:10:43.426141   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:10:43.486953   61298 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:10:43.486975   61298 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:43.491689   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:10:43.491709   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:10:43.505611   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.538745   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:43.538785   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:10:43.600598   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:44.933368   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.528176624s)
	I1212 21:10:44.933442   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.427784857s)
	I1212 21:10:44.933493   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933511   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933539   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332913009s)
	I1212 21:10:44.933496   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933559   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933566   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933569   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933926   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.933943   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933944   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.933955   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933964   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933974   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934081   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934096   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934118   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934120   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934127   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934132   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934138   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934156   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934372   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934397   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934401   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934808   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934845   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934858   61298 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-171828"
	I1212 21:10:44.937727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.937783   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.937806   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.945948   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.945966   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.946202   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.946220   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.949385   61298 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1212 21:10:43.688668   60948 retry.go:31] will retry after 13.919612963s: kubelet not initialised
	I1212 21:10:44.951009   61298 addons.go:502] enable addons completed in 1.751742212s: enabled=[storage-provisioner metrics-server default-storageclass]
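One way to confirm that the metrics-server addon enabled above is actually serving, assuming the deployment and aggregated APIService names that metrics-server normally registers (a sketch, not part of the captured run):

  # Check the addon rollout and the aggregated metrics API it registers.
  kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
  kubectl get apiservice v1beta1.metrics.k8s.io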
	I1212 21:10:45.583280   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.432062   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:45.929995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.305027541s)
	I1212 21:10:43.909740   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.296738263s)
	I1212 21:10:43.909764   60628 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:43.909770   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 21:10:43.909810   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:45.879475   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969630074s)
	I1212 21:10:45.879502   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 21:10:45.879527   60628 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:45.879592   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:47.584004   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:50.113807   61298 node_ready.go:49] node "default-k8s-diff-port-171828" has status "Ready":"True"
	I1212 21:10:50.113837   61298 node_ready.go:38] duration metric: took 6.626786171s waiting for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:50.113850   61298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:50.128903   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656130   61298 pod_ready.go:92] pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:50.656153   61298 pod_ready.go:81] duration metric: took 527.212389ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656161   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:47.931716   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.433176   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.267864   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.388242252s)
	I1212 21:10:50.267898   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 21:10:50.267931   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:50.267977   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:52.845895   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.577890173s)
	I1212 21:10:52.845935   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 21:10:52.845969   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.846023   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.677971   61298 pod_ready.go:102] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:53.179154   61298 pod_ready.go:92] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.179186   61298 pod_ready.go:81] duration metric: took 2.523018353s waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.179200   61298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185649   61298 pod_ready.go:92] pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.185673   61298 pod_ready.go:81] duration metric: took 6.463925ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185685   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193280   61298 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.193303   61298 pod_ready.go:81] duration metric: took 1.00761061s waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193313   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484196   61298 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.484223   61298 pod_ready.go:81] duration metric: took 290.902142ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484240   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883746   61298 pod_ready.go:92] pod "kube-proxy-47qmb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.883773   61298 pod_ready.go:81] duration metric: took 399.524854ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883784   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283637   61298 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:55.283670   61298 pod_ready.go:81] duration metric: took 399.871874ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283684   61298 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:52.931372   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.932174   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.204367   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.358317317s)
	I1212 21:10:54.204393   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 21:10:54.204425   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:54.204485   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:56.066774   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.862261726s)
	I1212 21:10:56.066802   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 21:10:56.066825   60628 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:56.066874   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:57.118959   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.052055479s)
	I1212 21:10:57.118985   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 21:10:57.119009   60628 cache_images.go:123] Successfully loaded all cached images
	I1212 21:10:57.119021   60628 cache_images.go:92] LoadImages completed in 17.245274715s
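The load sequence above copies each cached tarball to /var/lib/minikube/images and feeds it into the CRI-O image store via podman. A condensed sketch of the same steps for a single image, run on the node (the path mirrors the log; the flags are standard podman/crictl usage):

  # Load one cached image tarball and confirm the container runtime can see it.
  sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
  sudo crictl images | grep kube-proxy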
	I1212 21:10:57.119103   60628 ssh_runner.go:195] Run: crio config
	I1212 21:10:57.180068   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:10:57.180093   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:57.180109   60628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:57.180127   60628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-343495 NodeName:no-preload-343495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:57.180250   60628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-343495"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
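The block above is the combined kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml. A quick way to sanity-check a file like this before it is used, assuming the kubeadm binary at the path shown in the log supports the `config validate` subcommand (illustrative, not run by the test):

  # Validate the generated multi-document kubeadm config against the target version's API.
  sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml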
	
	I1212 21:10:57.180330   60628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-343495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
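After writing a kubelet drop-in like the one above, the usual way to pick it up is a systemd reload and service restart (generic systemd steps, shown as a sketch rather than copied from the log):

  # Re-read unit files and restart kubelet so the new ExecStart flags take effect.
  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  systemctl is-active kubelet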
	I1212 21:10:57.180382   60628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:10:57.191949   60628 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:57.192034   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:57.202921   60628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 21:10:57.219512   60628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:10:57.235287   60628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1212 21:10:57.252278   60628 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:57.256511   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:57.268744   60628 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495 for IP: 192.168.61.176
	I1212 21:10:57.268781   60628 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:57.268959   60628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:57.269032   60628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:57.269133   60628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/client.key
	I1212 21:10:57.269228   60628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key.492ad1cf
	I1212 21:10:57.269316   60628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key
	I1212 21:10:57.269466   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:57.269511   60628 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:57.269526   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:57.269562   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:57.269597   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:57.269629   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:57.269685   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:57.270311   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:57.295960   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:10:57.320157   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:57.344434   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:57.368906   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:57.391830   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:57.415954   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:57.441182   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:57.465055   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:57.489788   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:57.513828   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:57.536138   60628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:57.553168   60628 ssh_runner.go:195] Run: openssl version
	I1212 21:10:57.558771   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:57.570141   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574935   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574990   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.580985   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:57.592528   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:57.603477   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608448   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608511   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.614316   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:57.625667   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:57.637284   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642258   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642323   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.648072   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
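The openssl x509 -hash / ln -fs pairs above install each certificate under the OpenSSL subject-hash lookup convention: clients that trust /etc/ssl/certs resolve a CA by the hash of its subject name, so every installed PEM also gets a "<hash>.0" symlink. A rough sketch of the same step (the helper name is made up for illustration; this is not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<subject-hash>.0" symlink that OpenSSL-based
// clients expect in the certs directory, mirroring the commands in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// "ln -fs" semantics: drop any stale link before recreating it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}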
	I1212 21:10:57.659762   60628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:57.664517   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:57.670385   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:57.676336   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:57.682074   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:57.688387   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:57.694542   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
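Each openssl x509 ... -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would cause the certs to be regenerated instead of reused. A minimal Go equivalent, assuming the file holds a single PEM-encoded certificate (illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the first certificate in the PEM file at path
// is still valid for at least d, roughly what `openssl x509 -checkend 86400`
// verifies for a 24h window.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}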
	I1212 21:10:57.700400   60628 kubeadm.go:404] StartCluster: {Name:no-preload-343495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:57.700520   60628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:57.700576   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:57.738703   60628 cri.go:89] found id: ""
	I1212 21:10:57.738776   60628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:57.749512   60628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:57.749538   60628 kubeadm.go:636] restartCluster start
	I1212 21:10:57.749610   60628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:57.758905   60628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.760000   60628 kubeconfig.go:92] found "no-preload-343495" server: "https://192.168.61.176:8443"
	I1212 21:10:57.762219   60628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:57.773107   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.773181   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.785478   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.785500   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.785554   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.797412   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.613799   60948 retry.go:31] will retry after 13.009137494s: kubelet not initialised
	I1212 21:10:57.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.591232   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:02.093666   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:57.429861   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.429944   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:01.438267   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:58.297630   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.297712   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.312155   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:58.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.797652   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.809726   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.297677   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.309875   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.798441   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.798531   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.810533   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.298154   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.298237   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.310050   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.797683   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.809712   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.298094   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.298224   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.310181   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.797635   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.797742   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.809336   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.297912   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.297997   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.309215   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.797666   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.797749   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.808815   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.590426   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.590850   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.929977   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.429697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.297975   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.298066   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.308865   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:03.798103   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.798207   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.809553   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.297580   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.297653   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.309100   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.797646   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.797767   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.809269   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.297665   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.309281   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.797809   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.797898   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.809794   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.298381   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.298497   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.309467   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.798050   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.798132   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.809758   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.298354   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:07.298434   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:07.309655   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.773157   60628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:11:07.773216   60628 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:11:07.773229   60628 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:11:07.773290   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:11:07.815986   60628 cri.go:89] found id: ""
	I1212 21:11:07.816068   60628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:11:07.832950   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:11:07.842287   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:11:07.842353   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851694   60628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851720   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:10.630075   60948 kubeadm.go:787] kubelet initialised
	I1212 21:11:10.630105   60948 kubeadm.go:788] duration metric: took 47.146743334s waiting for restarted kubelet to initialise ...
	I1212 21:11:10.630116   60948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:10.637891   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644674   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.644700   60948 pod_ready.go:81] duration metric: took 6.771094ms waiting for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644710   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651801   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.651830   60948 pod_ready.go:81] duration metric: took 7.112566ms waiting for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651845   60948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659678   60948 pod_ready.go:92] pod "etcd-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.659700   60948 pod_ready.go:81] duration metric: took 7.845111ms waiting for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659711   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665929   60948 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.665958   60948 pod_ready.go:81] duration metric: took 6.237833ms waiting for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665972   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028938   60948 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.028961   60948 pod_ready.go:81] duration metric: took 362.981718ms waiting for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028973   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428824   60948 pod_ready.go:92] pod "kube-proxy-5mvzb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.428853   60948 pod_ready.go:81] duration metric: took 399.87314ms waiting for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428866   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828546   60948 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.828578   60948 pod_ready.go:81] duration metric: took 399.696769ms waiting for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828590   60948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
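The pod_ready lines above poll each system-critical pod until its PodReady condition reports True, giving up after the stated 4m0s budget. A hedged client-go sketch of that readiness test (the helper name is made up for illustration and is not minikube's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True,
// which is what the pod_ready.go lines in the log are waiting for.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(cs, "kube-system", "etcd-old-k8s-version-372099")
	fmt.Println(ready, err)
}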
	I1212 21:11:09.094309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:11.098257   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:08.928635   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:10.929896   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:07.988857   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.772924   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.980401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.108938   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.189716   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:11:09.189780   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.201432   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.722085   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.222325   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.721931   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.222186   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.721642   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.745977   60628 api_server.go:72] duration metric: took 2.556260463s to wait for apiserver process to appear ...
	I1212 21:11:11.746005   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:11:11.746025   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:14.135897   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.138482   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:13.590920   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.591230   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:12.931314   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.429327   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.294367   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.294401   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.294413   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.347744   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.347780   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.848435   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.853773   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:16.853823   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.348312   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.359543   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.359579   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.848425   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.853966   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.854006   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:18.348644   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:18.373028   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:11:18.385301   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:11:18.385341   60628 api_server.go:131] duration metric: took 6.639327054s to wait for apiserver health ...
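The /healthz probes above walk through the usual apiserver start-up sequence: 403 while the unauthenticated request is rejected as system:anonymous, 500 while individual post-start hooks (rbac/bootstrap-roles, the priority-class bootstrap) have not yet finished, and finally 200 with body "ok". A minimal sketch of such a poller; TLS verification is skipped here purely for brevity and this is not how minikube builds its client:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver's /healthz endpoint until it answers
// 200 "ok" or the deadline passes, printing each intermediate response
// the way the log above does.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is up
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.61.176:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}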
	I1212 21:11:18.385353   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:11:18.385362   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:11:18.387289   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:11:18.636422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:20.636472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.592197   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.593157   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:21.594049   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.434254   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.930697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:18.388998   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:11:18.449634   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:11:18.491001   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:11:18.517694   60628 system_pods.go:59] 8 kube-system pods found
	I1212 21:11:18.517729   60628 system_pods.go:61] "coredns-76f75df574-s9jgn" [b13d32b4-a44b-4f79-bece-d0adafef4c7c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:11:18.517740   60628 system_pods.go:61] "etcd-no-preload-343495" [ad48db04-9c79-48e9-a001-1a9061c43cb9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:11:18.517754   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [24d024c1-a89f-4ede-8dbf-7502f0179cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:11:18.517760   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [10ce49e3-2679-4ac5-89aa-9179582ae778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:11:18.517765   60628 system_pods.go:61] "kube-proxy-492l6" [3a2bbe46-0506-490f-aae8-a97e48f3205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:11:18.517773   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [bca80470-c204-4a34-9c7d-5de3ad382c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:11:18.517778   60628 system_pods.go:61] "metrics-server-57f55c9bc5-tmmk4" [11066021-353e-418e-9c7f-78e72dae44a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:11:18.517785   60628 system_pods.go:61] "storage-provisioner" [e681d4cd-f2f6-4cf3-ba09-0f361a64aafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:11:18.517794   60628 system_pods.go:74] duration metric: took 26.756848ms to wait for pod list to return data ...
	I1212 21:11:18.517815   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:11:18.521330   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:11:18.521362   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:11:18.521377   60628 node_conditions.go:105] duration metric: took 3.557177ms to run NodePressure ...
	I1212 21:11:18.521401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:18.945267   60628 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958848   60628 kubeadm.go:787] kubelet initialised
	I1212 21:11:18.958877   60628 kubeadm.go:788] duration metric: took 13.578451ms waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958886   60628 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:18.964819   60628 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:20.987111   60628 pod_ready.go:102] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.494268   60628 pod_ready.go:92] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:22.494299   60628 pod_ready.go:81] duration metric: took 3.529452237s waiting for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:22.494311   60628 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:23.136140   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:25.635800   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.093215   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.590861   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.429921   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.928565   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.929668   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.514490   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.013783   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.637165   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:30.133948   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.091057   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.598428   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:28.930654   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.428436   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.514918   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.514945   60628 pod_ready.go:81] duration metric: took 7.020626508s waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.514955   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524669   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.524696   60628 pod_ready.go:81] duration metric: took 9.734059ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524709   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541808   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.541830   60628 pod_ready.go:81] duration metric: took 17.113672ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541839   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553955   60628 pod_ready.go:92] pod "kube-proxy-492l6" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.553979   60628 pod_ready.go:81] duration metric: took 12.134143ms waiting for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553988   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562798   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.562835   60628 pod_ready.go:81] duration metric: took 8.836628ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562850   60628 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:31.818614   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:32.134558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.135376   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.634429   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.090158   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.091290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.429336   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:35.430448   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.819222   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.318847   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.637527   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:41.134980   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.115262   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.591502   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:37.929700   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:39.929830   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.318911   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.319619   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.319750   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.135558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.635174   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.090309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.590529   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.434126   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.931810   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.818997   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.321699   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.635472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.636294   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.640471   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.590577   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.590885   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.591122   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.429836   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.431518   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.928631   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.823419   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:52.319752   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.137390   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.634152   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.593196   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.089777   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.929750   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:55.932860   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.321554   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.819877   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.136605   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.092816   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.591682   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.429543   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.432255   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:59.318053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.325068   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.137023   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.635397   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.091397   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.094195   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:02.933370   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.430020   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.819751   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:06.319806   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.137648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.635154   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.591471   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.091503   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.430684   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:09.929393   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.319984   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.821053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.637206   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.136850   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.590992   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.591391   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.591744   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.429299   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.429724   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.430114   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:13.329939   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.820117   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.820519   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.199675   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.635179   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.635426   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.091628   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.091739   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:18.929340   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.929933   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.319134   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.819399   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:24.133408   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:26.134293   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:23.093543   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.591828   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.930710   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.434148   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.319949   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.337078   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.134422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.137461   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.090730   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.092555   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.928685   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.929200   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.929272   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.819461   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.819541   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.633893   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.636198   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.636373   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.590019   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.590953   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.591420   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.929488   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:35.929671   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.819661   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.322177   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.137315   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.635168   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.097607   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.590836   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:37.930820   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.930916   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:38.324332   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:40.819395   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.819784   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.640489   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.134648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.590910   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.592083   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.429717   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:44.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.431053   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.320122   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:47.819547   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.135328   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.137213   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.091979   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.093149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.929529   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:51.428177   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.319560   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.820242   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.635136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.637000   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.591430   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.090634   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:53.429307   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.429455   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.821647   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.319971   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.135608   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.137606   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.634197   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.590565   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:00.091074   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.429785   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.928834   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.818255   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.819526   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:03.635008   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.134591   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.591023   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.592260   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.092331   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.430411   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.930385   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.326885   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.822828   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:08.135379   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:10.136957   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.590114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.593478   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.434219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.929736   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.930477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.322955   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.819793   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:12.137554   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.635349   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.637857   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.092558   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.591772   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.429362   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.931219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.319867   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.325224   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.135196   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.634789   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.090842   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.591235   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.929464   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:18.326463   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:20.819839   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:22.820060   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.636879   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.135188   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.591676   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.591833   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.929811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.429286   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.319356   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.819668   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.634130   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.635441   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.591961   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.090560   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:32.091429   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.929344   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.929561   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:29.820548   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:31.820901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.134798   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.635317   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.094290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.589895   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.429811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.429995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.319447   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.822690   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.636833   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.136281   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:38.591586   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.090302   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.929337   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.428532   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:39.321656   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.820917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.635037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.135037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:43.091587   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.590322   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.429616   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.430483   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.431960   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.319403   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.326448   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.136136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:49.635013   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.635308   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.592114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:50.089825   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:52.090721   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.928619   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.429031   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.820121   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.319794   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.134872   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:54.589746   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.590432   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.429817   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:55.929211   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.820666   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.322986   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.135622   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.139553   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.592602   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:01.091154   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:57.929777   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:59.930300   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.818901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.819587   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.634488   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.636059   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:03.591886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:06.091886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.432472   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.125384   60833 pod_ready.go:81] duration metric: took 4m0.000960425s waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:05.125428   60833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:05.125437   60833 pod_ready.go:38] duration metric: took 4m2.799403108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:05.125453   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:05.125518   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:05.125592   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:05.203017   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:05.203045   60833 cri.go:89] found id: ""
	I1212 21:14:05.203054   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:05.203115   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.208622   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:05.208693   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:05.250079   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.250102   60833 cri.go:89] found id: ""
	I1212 21:14:05.250118   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:05.250161   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.254870   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:05.254946   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:05.323718   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:05.323748   60833 cri.go:89] found id: ""
	I1212 21:14:05.323757   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:05.323819   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.328832   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:05.328902   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:05.372224   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.372252   60833 cri.go:89] found id: ""
	I1212 21:14:05.372262   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:05.372316   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.377943   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:05.378007   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:05.417867   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:05.417894   60833 cri.go:89] found id: ""
	I1212 21:14:05.417905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:05.417961   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.422198   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:05.422264   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:05.462031   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:05.462052   60833 cri.go:89] found id: ""
	I1212 21:14:05.462059   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:05.462114   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.466907   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:05.466962   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:05.512557   60833 cri.go:89] found id: ""
	I1212 21:14:05.512585   60833 logs.go:284] 0 containers: []
	W1212 21:14:05.512592   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:05.512597   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:05.512663   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:05.553889   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.553914   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:05.553921   60833 cri.go:89] found id: ""
	I1212 21:14:05.553929   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:05.553982   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.558864   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.563550   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:05.563572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:05.627093   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:05.627135   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:05.642800   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:05.642827   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:05.820642   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:05.820683   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.871256   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:05.871299   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.913399   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:05.913431   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.955061   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:05.955103   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:06.012639   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:06.012681   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:06.057933   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:06.057970   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:06.110367   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:06.110400   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:06.173711   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:06.173746   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:06.214291   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:06.214328   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:06.260105   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:06.260142   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:03.320010   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.321011   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.821313   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.134137   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.635405   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:08.591048   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:10.593286   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.219373   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:09.237985   60833 api_server.go:72] duration metric: took 4m14.403294004s to wait for apiserver process to appear ...
	I1212 21:14:09.238014   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:09.238057   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:09.238119   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:09.281005   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:09.281028   60833 cri.go:89] found id: ""
	I1212 21:14:09.281037   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:09.281097   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.285354   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:09.285436   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:09.336833   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.336864   60833 cri.go:89] found id: ""
	I1212 21:14:09.336874   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:09.336937   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.342850   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:09.342928   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:09.387107   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.387133   60833 cri.go:89] found id: ""
	I1212 21:14:09.387143   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:09.387202   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.392729   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:09.392806   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:09.433197   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:09.433225   60833 cri.go:89] found id: ""
	I1212 21:14:09.433232   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:09.433281   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.438043   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:09.438092   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:09.486158   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.486185   60833 cri.go:89] found id: ""
	I1212 21:14:09.486200   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:09.486255   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.491667   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:09.491735   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:09.536085   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:09.536108   60833 cri.go:89] found id: ""
	I1212 21:14:09.536114   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:09.536165   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.540939   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:09.541008   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:09.585160   60833 cri.go:89] found id: ""
	I1212 21:14:09.585187   60833 logs.go:284] 0 containers: []
	W1212 21:14:09.585195   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:09.585200   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:09.585254   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:09.628972   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:09.629001   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:09.629008   60833 cri.go:89] found id: ""
	I1212 21:14:09.629017   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:09.629075   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.634242   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.639308   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:09.639344   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:09.766299   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:09.766329   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.816655   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:09.816699   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.863184   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:09.863212   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.924345   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:09.924382   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:10.363852   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:10.363897   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:10.417375   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:10.417407   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:10.432758   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:10.432788   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:10.483732   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:10.483778   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:10.538254   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:10.538283   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:10.598142   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:10.598174   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:10.650678   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:10.650710   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:10.697971   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:10.698000   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:10.318636   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.321917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.134600   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:14.134822   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.634845   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.091008   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:15.589901   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.241720   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:14:13.248465   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:14:13.249814   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:14:13.249839   60833 api_server.go:131] duration metric: took 4.011816395s to wait for apiserver health ...
	I1212 21:14:13.249848   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:14:13.249871   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:13.249916   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:13.300138   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:13.300161   60833 cri.go:89] found id: ""
	I1212 21:14:13.300171   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:13.300228   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.306350   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:13.306424   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:13.358644   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.358667   60833 cri.go:89] found id: ""
	I1212 21:14:13.358676   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:13.358737   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.363921   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:13.363989   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:13.413339   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:13.413366   60833 cri.go:89] found id: ""
	I1212 21:14:13.413374   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:13.413420   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.418188   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:13.418248   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:13.461495   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:13.461522   60833 cri.go:89] found id: ""
	I1212 21:14:13.461532   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:13.461581   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.465878   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:13.465951   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:13.511866   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.511895   60833 cri.go:89] found id: ""
	I1212 21:14:13.511905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:13.511960   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.516312   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:13.516381   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:13.560993   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.561023   60833 cri.go:89] found id: ""
	I1212 21:14:13.561034   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:13.561092   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.565439   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:13.565514   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:13.608401   60833 cri.go:89] found id: ""
	I1212 21:14:13.608434   60833 logs.go:284] 0 containers: []
	W1212 21:14:13.608445   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:13.608452   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:13.608507   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:13.661929   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:13.661956   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:13.661963   60833 cri.go:89] found id: ""
	I1212 21:14:13.661972   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:13.662036   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.667039   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.671770   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:13.671791   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:13.793637   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:13.793671   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.844253   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:13.844286   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.886965   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:13.886997   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.946537   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:13.946572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:13.999732   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:13.999769   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:14.015819   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:14.015849   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:14.063649   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:14.063684   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:14.116465   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:14.116499   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:14.179838   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:14.179875   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:14.224213   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:14.224243   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:14.262832   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:14.262858   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:14.307981   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:14.308008   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
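	(The "Gathering logs for ..." block above is minikube shelling out to crictl, journalctl and dmesg on the guest. A minimal sketch of collecting the same diagnostics by hand over minikube ssh; the profile name is taken from the "Done!" line below and the arguments mirror the ssh_runner commands recorded above:)
	    # Hedged sketch: manual equivalent of the diagnostics collection logged above.
	    minikube -p embed-certs-831188 ssh -- sudo crictl ps -a                    # all CRI containers, any state
	    minikube -p embed-certs-831188 ssh -- sudo crictl logs --tail 400 <container-id>   # logs for one container id
	    minikube -p embed-certs-831188 ssh -- sudo journalctl -u kubelet -n 400    # recent kubelet unit logs
	    minikube -p embed-certs-831188 ssh -- sudo journalctl -u crio -n 400       # recent CRI-O unit logs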
	I1212 21:14:17.188864   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:14:17.188919   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.188927   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.188934   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.188943   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.188950   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.188959   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.188980   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.188988   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.188996   60833 system_pods.go:74] duration metric: took 3.939142839s to wait for pod list to return data ...
	I1212 21:14:17.189005   60833 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:14:17.192352   60833 default_sa.go:45] found service account: "default"
	I1212 21:14:17.192390   60833 default_sa.go:55] duration metric: took 3.37914ms for default service account to be created ...
	I1212 21:14:17.192400   60833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:14:17.198396   60833 system_pods.go:86] 8 kube-system pods found
	I1212 21:14:17.198424   60833 system_pods.go:89] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.198429   60833 system_pods.go:89] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.198433   60833 system_pods.go:89] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.198438   60833 system_pods.go:89] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.198442   60833 system_pods.go:89] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.198446   60833 system_pods.go:89] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.198455   60833 system_pods.go:89] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.198459   60833 system_pods.go:89] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.198466   60833 system_pods.go:126] duration metric: took 6.060971ms to wait for k8s-apps to be running ...
	I1212 21:14:17.198473   60833 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:14:17.198513   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:14:17.217190   60833 system_svc.go:56] duration metric: took 18.71037ms WaitForService to wait for kubelet.
	I1212 21:14:17.217224   60833 kubeadm.go:581] duration metric: took 4m22.382539055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:14:17.217249   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:14:17.221504   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:14:17.221540   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:14:17.221555   60833 node_conditions.go:105] duration metric: took 4.300742ms to run NodePressure ...
	I1212 21:14:17.221569   60833 start.go:228] waiting for startup goroutines ...
	I1212 21:14:17.221577   60833 start.go:233] waiting for cluster config update ...
	I1212 21:14:17.221594   60833 start.go:242] writing updated cluster config ...
	I1212 21:14:17.221939   60833 ssh_runner.go:195] Run: rm -f paused
	I1212 21:14:17.277033   60833 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:14:17.279044   60833 out.go:177] * Done! kubectl is now configured to use "embed-certs-831188" cluster and "default" namespace by default
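	(Startup of the embed-certs profile completes here even though metrics-server-57f55c9bc5-v978l stayed Pending in both pod listings above; only the system-critical pods need to be Ready. A minimal sketch of inspecting that pod afterwards, assuming the context name written by the "Done!" line:)
	    # Hedged sketch: check the still-Pending metrics-server pod noted above.
	    kubectl --context embed-certs-831188 -n kube-system get pods | grep metrics-server
	    kubectl --context embed-certs-831188 -n kube-system describe pod metrics-server-57f55c9bc5-v978l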
	I1212 21:14:14.818262   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.823731   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:18.634990   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.135517   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:17.593149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:20.091419   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:22.091781   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:19.320462   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.819129   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.636400   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.134084   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:24.591552   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:27.090974   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.825879   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.318691   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.135741   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:30.635812   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:29.091882   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.590150   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.819815   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.319140   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.134738   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:35.637961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.591929   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.091976   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.819872   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.325409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.139066   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.635659   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:41.090674   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.819966   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.820281   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.135071   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.635762   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.091695   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.595126   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.323343   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.819822   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.134846   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.135229   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.092328   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.591470   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.319483   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.819702   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.135550   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:54.634163   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:56.634961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.593957   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.091338   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.284411   61298 pod_ready.go:81] duration metric: took 4m0.000712263s waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:55.284453   61298 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:55.284462   61298 pod_ready.go:38] duration metric: took 4m5.170596318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:55.284486   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:55.284536   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:55.284595   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:55.345012   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:55.345043   61298 cri.go:89] found id: ""
	I1212 21:14:55.345055   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:55.345118   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.350261   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:55.350329   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:55.403088   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:55.403116   61298 cri.go:89] found id: ""
	I1212 21:14:55.403124   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:55.403169   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.408043   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:55.408103   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:55.449581   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:55.449608   61298 cri.go:89] found id: ""
	I1212 21:14:55.449615   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:55.449670   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.454762   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:55.454828   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:55.502919   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:55.502960   61298 cri.go:89] found id: ""
	I1212 21:14:55.502970   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:55.503050   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.508035   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:55.508101   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:55.550335   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:55.550365   61298 cri.go:89] found id: ""
	I1212 21:14:55.550376   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:55.550437   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.554985   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:55.555043   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:55.599678   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:55.599707   61298 cri.go:89] found id: ""
	I1212 21:14:55.599716   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:55.599772   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.604830   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:55.604913   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:55.651733   61298 cri.go:89] found id: ""
	I1212 21:14:55.651767   61298 logs.go:284] 0 containers: []
	W1212 21:14:55.651774   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:55.651779   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:55.651825   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:55.690691   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:55.690716   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.690723   61298 cri.go:89] found id: ""
	I1212 21:14:55.690732   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:55.690778   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.695227   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.699700   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:14:55.699723   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:55.751176   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:14:55.751210   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.789388   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:55.789417   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:56.270250   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:56.270296   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:56.315517   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:56.315549   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:56.377591   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:14:56.377648   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:56.432089   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:56.432124   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:56.496004   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:56.496038   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:56.543979   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:56.544010   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:56.599613   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:14:56.599644   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:56.646113   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:14:56.646146   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:56.693081   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:56.693111   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:56.709557   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:56.709591   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:53.319742   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.320811   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:57.820478   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.134092   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:01.135233   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.366965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:59.385251   61298 api_server.go:72] duration metric: took 4m16.159743319s to wait for apiserver process to appear ...
	I1212 21:14:59.385280   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:59.385317   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:59.385365   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:59.433011   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:59.433038   61298 cri.go:89] found id: ""
	I1212 21:14:59.433047   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:59.433088   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.438059   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:59.438136   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:59.477000   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.477078   61298 cri.go:89] found id: ""
	I1212 21:14:59.477093   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:59.477153   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.481716   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:59.481791   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:59.526936   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.526966   61298 cri.go:89] found id: ""
	I1212 21:14:59.526975   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:59.527037   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.535907   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:59.535985   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:59.580818   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:59.580848   61298 cri.go:89] found id: ""
	I1212 21:14:59.580856   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:59.580916   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.585685   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:59.585733   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:59.640697   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:59.640721   61298 cri.go:89] found id: ""
	I1212 21:14:59.640731   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:59.640798   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.644940   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:59.645004   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:59.687873   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.687901   61298 cri.go:89] found id: ""
	I1212 21:14:59.687910   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:59.687960   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.692382   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:59.692442   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:59.735189   61298 cri.go:89] found id: ""
	I1212 21:14:59.735225   61298 logs.go:284] 0 containers: []
	W1212 21:14:59.735235   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:59.735256   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:59.735323   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:59.778668   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:59.778702   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:59.778708   61298 cri.go:89] found id: ""
	I1212 21:14:59.778717   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:59.778773   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.782827   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.787277   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:59.787303   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:59.802470   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:59.802499   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.864191   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:59.864225   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.911007   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:59.911032   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.975894   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:59.975932   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:00.021750   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:00.021780   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:00.061527   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:00.061557   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:00.484318   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:00.484359   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:00.549321   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:00.549357   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:00.600589   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:00.600629   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:00.643660   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:00.643690   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:00.698016   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:00.698047   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:00.741819   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:00.741850   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:00.319685   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:02.320017   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.136545   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:05.635632   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.383318   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:15:03.389750   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:15:03.391084   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:15:03.391117   61298 api_server.go:131] duration metric: took 4.005829911s to wait for apiserver health ...
	I1212 21:15:03.391155   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:03.391181   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:15:03.391262   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:15:03.438733   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:03.438754   61298 cri.go:89] found id: ""
	I1212 21:15:03.438762   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:15:03.438809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.443133   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:15:03.443203   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:15:03.488960   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.488990   61298 cri.go:89] found id: ""
	I1212 21:15:03.489001   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:15:03.489058   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.493741   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:15:03.493807   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:15:03.541286   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:03.541316   61298 cri.go:89] found id: ""
	I1212 21:15:03.541325   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:15:03.541387   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.545934   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:15:03.546008   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:15:03.585937   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.585962   61298 cri.go:89] found id: ""
	I1212 21:15:03.585971   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:15:03.586039   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.590444   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:15:03.590516   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:15:03.626793   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:03.626826   61298 cri.go:89] found id: ""
	I1212 21:15:03.626835   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:15:03.626894   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.631829   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:15:03.631906   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:15:03.676728   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:03.676750   61298 cri.go:89] found id: ""
	I1212 21:15:03.676758   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:15:03.676809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.681068   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:15:03.681123   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:15:03.723403   61298 cri.go:89] found id: ""
	I1212 21:15:03.723430   61298 logs.go:284] 0 containers: []
	W1212 21:15:03.723437   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:15:03.723442   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:15:03.723502   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:15:03.772837   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.772868   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.772875   61298 cri.go:89] found id: ""
	I1212 21:15:03.772884   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:15:03.772940   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.777274   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.782354   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:15:03.782379   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.823892   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:03.823919   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.866656   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:15:03.866689   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.920757   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:03.920798   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.963737   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:03.963766   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:04.005559   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:15:04.005582   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:04.054868   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:04.054901   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:04.118941   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:04.118973   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:04.188272   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:15:04.188314   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:04.230473   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:15:04.230502   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:15:04.247069   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:04.247097   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:04.425844   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:04.425877   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:04.492751   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:04.492789   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:07.374768   61298 system_pods.go:59] 8 kube-system pods found
	I1212 21:15:07.374796   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.374801   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.374806   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.374810   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.374814   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.374818   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.374823   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.374828   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.374835   61298 system_pods.go:74] duration metric: took 3.983674471s to wait for pod list to return data ...
	I1212 21:15:07.374842   61298 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:07.377370   61298 default_sa.go:45] found service account: "default"
	I1212 21:15:07.377391   61298 default_sa.go:55] duration metric: took 2.542814ms for default service account to be created ...
	I1212 21:15:07.377400   61298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:07.384723   61298 system_pods.go:86] 8 kube-system pods found
	I1212 21:15:07.384751   61298 system_pods.go:89] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.384758   61298 system_pods.go:89] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.384767   61298 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.384776   61298 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.384782   61298 system_pods.go:89] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.384789   61298 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.384800   61298 system_pods.go:89] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.384809   61298 system_pods.go:89] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.384824   61298 system_pods.go:126] duration metric: took 7.416446ms to wait for k8s-apps to be running ...
	I1212 21:15:07.384838   61298 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:15:07.384893   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:07.402316   61298 system_svc.go:56] duration metric: took 17.466449ms WaitForService to wait for kubelet.
	I1212 21:15:07.402350   61298 kubeadm.go:581] duration metric: took 4m24.176848962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:15:07.402367   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:15:07.405566   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:15:07.405598   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:15:07.405616   61298 node_conditions.go:105] duration metric: took 3.244651ms to run NodePressure ...
	I1212 21:15:07.405628   61298 start.go:228] waiting for startup goroutines ...
	I1212 21:15:07.405637   61298 start.go:233] waiting for cluster config update ...
	I1212 21:15:07.405649   61298 start.go:242] writing updated cluster config ...
	I1212 21:15:07.405956   61298 ssh_runner.go:195] Run: rm -f paused
	I1212 21:15:07.457339   61298 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:15:07.459492   61298 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-171828" cluster and "default" namespace by default
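	(The default-k8s-diff-port profile finishes the same way; note that its apiserver was probed on port 8444 (192.168.72.253:8444) rather than the usual 8443. A minimal sketch of confirming the non-default port from the client side, assuming the context written by the "Done!" line:)
	    # Hedged sketch: cluster-info should print the API server URL with the 8444 port used above.
	    kubectl --context default-k8s-diff-port-171828 cluster-info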
	I1212 21:15:04.820409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:07.323802   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:08.135943   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:10.633863   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:11.829177   60948 pod_ready.go:81] duration metric: took 4m0.000566874s waiting for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:11.829231   60948 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:11.829268   60948 pod_ready.go:38] duration metric: took 4m1.1991406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:11.829314   60948 kubeadm.go:640] restartCluster took 5m11.909727716s
	W1212 21:15:11.829387   60948 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:11.829425   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:09.824487   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:12.319761   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:14.818898   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:16.822843   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:18.398899   60948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.569443116s)
	I1212 21:15:18.398988   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:18.421423   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:18.437661   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:18.459692   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:18.459747   60948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 21:15:18.529408   60948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1212 21:15:18.529485   60948 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:18.690865   60948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:18.691034   60948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:18.691165   60948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:18.939806   60948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:18.939966   60948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:18.949943   60948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1212 21:15:19.070931   60948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:19.072676   60948 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:19.072783   60948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:19.072868   60948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:19.072976   60948 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:19.073053   60948 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:19.073145   60948 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:19.073253   60948 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:19.073367   60948 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:19.073461   60948 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:19.073562   60948 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:19.073669   60948 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:19.073732   60948 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:19.073833   60948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:19.136565   60948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:19.614416   60948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:19.754535   60948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:20.149412   60948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:20.150707   60948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:20.152444   60948 out.go:204]   - Booting up control plane ...
	I1212 21:15:20.152579   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:20.158445   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:20.162012   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:20.162125   60948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:20.163852   60948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:19.321950   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:21.334725   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:23.820711   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:26.320918   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.174689   60948 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.007313 seconds
	I1212 21:15:29.174814   60948 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:29.189641   60948 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:29.715080   60948 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:29.715312   60948 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-372099 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 21:15:30.225103   60948 kubeadm.go:322] [bootstrap-token] Using token: h843b5.c34afz2u52stqeoc
	I1212 21:15:30.226707   60948 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:30.226873   60948 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:30.237412   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:30.245755   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:30.252764   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:30.259184   60948 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:30.405726   60948 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:30.647756   60948 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:30.647812   60948 kubeadm.go:322] 
	I1212 21:15:30.647908   60948 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:30.647920   60948 kubeadm.go:322] 
	I1212 21:15:30.648030   60948 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:30.648040   60948 kubeadm.go:322] 
	I1212 21:15:30.648076   60948 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:30.648155   60948 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:30.648219   60948 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:30.648229   60948 kubeadm.go:322] 
	I1212 21:15:30.648358   60948 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:30.648477   60948 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:30.648571   60948 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:30.648582   60948 kubeadm.go:322] 
	I1212 21:15:30.648698   60948 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1212 21:15:30.648813   60948 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:30.648824   60948 kubeadm.go:322] 
	I1212 21:15:30.648920   60948 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649052   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:30.649101   60948 kubeadm.go:322]     --control-plane 	  
	I1212 21:15:30.649111   60948 kubeadm.go:322] 
	I1212 21:15:30.649205   60948 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:30.649214   60948 kubeadm.go:322] 
	I1212 21:15:30.649313   60948 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649435   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:30.649933   60948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
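The [WARNING Service-Kubelet] line above is kubeadm's own hint rather than a test failure: the kubelet unit is not enabled at boot on the VM. The test harness drives the kubelet directly, but on a plain host the fix would be exactly what the warning suggests (sketch only, not something the test runs):

    sudo systemctl enable kubelet.service   # what the kubeadm warning asks for
    systemctl is-enabled kubelet            # should now report "enabled"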
	I1212 21:15:30.649961   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:15:30.649971   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:30.651531   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:30.652689   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:30.663574   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
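The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration from the "Configuring bridge CNI" step above. Its exact contents are not in this log; the snippet below is only a representative bridge + portmap chain with an assumed pod subnet, shown as a shell heredoc for orientation:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF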
	I1212 21:15:30.686618   60948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:30.686690   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.686692   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=old-k8s-version-372099 minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.707974   60948 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:30.909886   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.037212   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.641453   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:28.819896   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.562965   60628 pod_ready.go:81] duration metric: took 4m0.000097626s waiting for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:29.563010   60628 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:29.563041   60628 pod_ready.go:38] duration metric: took 4m10.604144973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:29.563066   60628 kubeadm.go:640] restartCluster took 4m31.813522594s
	W1212 21:15:29.563127   60628 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:29.563156   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:32.141066   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:32.640787   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.140569   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.640785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.140535   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.641063   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.140492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.640819   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.140748   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.640647   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.141492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.641109   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.140524   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.641401   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.141549   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.641304   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.141537   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.641149   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.141391   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.640949   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.000355   60628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.437170953s)
	I1212 21:15:44.000430   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:44.014718   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:44.025263   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:44.035086   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:44.035133   60628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 21:15:44.089390   60628 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 21:15:44.089499   60628 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:44.275319   60628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:44.275496   60628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:44.275594   60628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:44.529521   60628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:42.141256   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:42.640563   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.140785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.640773   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.141155   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.641415   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.140534   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.641492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.141203   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.259301   60948 kubeadm.go:1088] duration metric: took 15.572687129s to wait for elevateKubeSystemPrivileges.
	I1212 21:15:46.259339   60948 kubeadm.go:406] StartCluster complete in 5m46.398198596s
	I1212 21:15:46.259364   60948 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.259455   60948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:15:46.261128   60948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.261410   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:15:46.261582   60948 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:15:46.261654   60948 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261676   60948 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-372099"
	W1212 21:15:46.261691   60948 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:15:46.261690   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:15:46.261729   60948 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261739   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.261745   60948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-372099"
	I1212 21:15:46.262128   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262150   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262176   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262204   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262371   60948 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-372099"
	I1212 21:15:46.262388   60948 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-372099"
	W1212 21:15:46.262396   60948 addons.go:240] addon metrics-server should already be in state true
	I1212 21:15:46.262431   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.262755   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262775   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.280829   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 21:15:46.281025   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1212 21:15:46.281167   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I1212 21:15:46.281451   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.282027   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282043   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282307   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282340   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282381   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282455   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282466   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.282760   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282816   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.283348   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283365   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283377   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.283388   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.286570   60948 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-372099"
	W1212 21:15:46.286591   60948 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:15:46.286618   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.287021   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.287041   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.300740   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1212 21:15:46.301674   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.301993   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I1212 21:15:46.302303   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.302317   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.302667   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.302772   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.302940   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.303112   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.303127   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.303537   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.304537   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.306285   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.308411   60948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:15:46.307398   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1212 21:15:46.307432   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.310694   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:15:46.310717   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:15:46.310737   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.311358   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.312839   60948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:15:44.530987   60628 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:44.531136   60628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:44.531267   60628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:44.531359   60628 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:44.531879   60628 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:44.532386   60628 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:44.533944   60628 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:44.535037   60628 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:44.536175   60628 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:44.537226   60628 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:44.537964   60628 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:44.538451   60628 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:44.538551   60628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:44.841462   60628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:45.059424   60628 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:15:45.613097   60628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:46.221274   60628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:46.372266   60628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:46.373199   60628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:46.376094   60628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:46.311872   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.314010   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.314158   60948 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.314170   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:15:46.314187   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.314387   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.314450   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.314958   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.314985   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.315221   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.315264   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.315563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.315745   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.315925   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.316191   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.322472   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324106   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.324142   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324390   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.324651   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.324861   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.325008   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.339982   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I1212 21:15:46.340365   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.340889   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.340915   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.341242   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.341434   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.343069   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.343366   60948 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.343384   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:15:46.343402   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.346212   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346596   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.346626   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346882   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.347322   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.347482   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.347618   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	W1212 21:15:46.380698   60948 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-372099" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:15:46.380724   60948 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1212 21:15:46.380745   60948 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:15:46.383175   60948 out.go:177] * Verifying Kubernetes components...
	I1212 21:15:46.384789   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:46.518292   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:15:46.518316   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:15:46.519393   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.554663   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.580810   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:15:46.580839   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:15:46.614409   60948 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.614501   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:15:46.628267   60948 node_ready.go:49] node "old-k8s-version-372099" has status "Ready":"True"
	I1212 21:15:46.628302   60948 node_ready.go:38] duration metric: took 13.858882ms waiting for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.628318   60948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:46.651927   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:46.651957   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:15:46.655191   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:46.734455   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:47.462832   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462859   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.462837   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462930   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465016   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465028   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465047   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465057   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465066   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465018   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465027   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465126   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465143   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465155   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465440   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465459   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465460   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465477   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465462   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465509   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.509931   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.509955   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.510242   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.510268   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.510289   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.529296   60948 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
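The "host record injected" message corresponds to the sed/replace pipeline over the coredns ConfigMap a few lines earlier: it prepends a hosts block mapping host.minikube.internal to 192.168.39.1. A manual spot check (illustrative only, not part of the test) would look like:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # expected to include:  192.168.39.1 host.minikube.internal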
	I1212 21:15:47.740624   60948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006125978s)
	I1212 21:15:47.740686   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.740704   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.741066   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741082   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741104   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.741117   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741344   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741370   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741380   60948 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-372099"
	I1212 21:15:47.741382   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.743094   60948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:15:46.377620   60628 out.go:204]   - Booting up control plane ...
	I1212 21:15:46.377753   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:46.380316   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:46.381669   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:46.400406   60628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:46.401911   60628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:46.402016   60628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 21:15:46.577916   60628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:47.744911   60948 addons.go:502] enable addons completed in 1.483323446s: enabled=[storage-provisioner default-storageclass metrics-server]
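At this point storage-provisioner, default-storageclass and metrics-server have been applied, and the rest of the log is the readiness poll. An equivalent manual look at the addon workloads (resource names assumed from the pod names that appear below) would be:

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pods | grep -E 'metrics-server|storage-provisioner'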
	I1212 21:15:48.879924   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:51.240011   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.081961   60628 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503798 seconds
	I1212 21:15:55.108753   60628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:55.132442   60628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:55.675426   60628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:55.675616   60628 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-343495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:15:56.197198   60628 kubeadm.go:322] [bootstrap-token] Using token: 6e6rca.dj99vsq9tzjoif3m
	I1212 21:15:56.198596   60628 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:56.198756   60628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:56.204758   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:15:56.217506   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:56.221482   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:56.225791   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:56.231024   60628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:56.249696   60628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:15:56.516070   60628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:56.613203   60628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:56.613227   60628 kubeadm.go:322] 
	I1212 21:15:56.613315   60628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:56.613340   60628 kubeadm.go:322] 
	I1212 21:15:56.613432   60628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:56.613447   60628 kubeadm.go:322] 
	I1212 21:15:56.613501   60628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:56.613588   60628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:56.613671   60628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:56.613682   60628 kubeadm.go:322] 
	I1212 21:15:56.613755   60628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 21:15:56.613762   60628 kubeadm.go:322] 
	I1212 21:15:56.613822   60628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:15:56.613832   60628 kubeadm.go:322] 
	I1212 21:15:56.613903   60628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:56.614004   60628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:56.614104   60628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:56.614116   60628 kubeadm.go:322] 
	I1212 21:15:56.614244   60628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:15:56.614369   60628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:56.614388   60628 kubeadm.go:322] 
	I1212 21:15:56.614507   60628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614653   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:56.614682   60628 kubeadm.go:322] 	--control-plane 
	I1212 21:15:56.614689   60628 kubeadm.go:322] 
	I1212 21:15:56.614787   60628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:56.614797   60628 kubeadm.go:322] 
	I1212 21:15:56.614865   60628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614993   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:56.616155   60628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:56.616184   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:15:56.616197   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:56.618787   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:53.240376   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.738865   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:56.620193   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:56.653642   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:15:56.701431   60628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:56.701520   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.701521   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=no-preload-343495 minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.765645   60628 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:57.021925   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.162627   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.772366   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.239852   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.239881   60948 pod_ready.go:81] duration metric: took 10.584655594s waiting for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.239895   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245919   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.245943   60948 pod_ready.go:81] duration metric: took 6.039649ms waiting for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245955   60948 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251905   60948 pod_ready.go:92] pod "kube-proxy-vzqkz" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.251933   60948 pod_ready.go:81] duration metric: took 5.969732ms waiting for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251943   60948 pod_ready.go:38] duration metric: took 10.623613273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:57.251963   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:15:57.252021   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:15:57.271808   60948 api_server.go:72] duration metric: took 10.891018678s to wait for apiserver process to appear ...
	I1212 21:15:57.271834   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:15:57.271853   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:15:57.279544   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:15:57.280373   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:15:57.280393   60948 api_server.go:131] duration metric: took 8.55283ms to wait for apiserver health ...
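The healthz probe above can be reproduced by hand against the same endpoint; it should answer "ok" once the control plane is up (whether anonymous access to /healthz is allowed depends on apiserver flags, so treat this as a sketch):

    curl -k https://192.168.39.202:8443/healthz
    # ok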
	I1212 21:15:57.280401   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:57.284489   60948 system_pods.go:59] 5 kube-system pods found
	I1212 21:15:57.284516   60948 system_pods.go:61] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.284520   60948 system_pods.go:61] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.284525   60948 system_pods.go:61] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.284531   60948 system_pods.go:61] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.284535   60948 system_pods.go:61] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.284542   60948 system_pods.go:74] duration metric: took 4.136571ms to wait for pod list to return data ...
	I1212 21:15:57.284549   60948 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:57.288616   60948 default_sa.go:45] found service account: "default"
	I1212 21:15:57.288643   60948 default_sa.go:55] duration metric: took 4.087698ms for default service account to be created ...
	I1212 21:15:57.288653   60948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:57.292785   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.292807   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.292812   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.292816   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.292822   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.292827   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.292842   60948 retry.go:31] will retry after 207.544988ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.505885   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.505911   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.505917   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.505921   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.505928   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.505932   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.505949   60948 retry.go:31] will retry after 367.076908ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.878466   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.878501   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.878509   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.878514   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.878520   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.878527   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.878547   60948 retry.go:31] will retry after 381.308829ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.264211   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.264237   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.264243   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.264247   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.264256   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.264262   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.264290   60948 retry.go:31] will retry after 366.461937ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.638206   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.638229   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.638234   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.638238   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.638245   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.638249   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.638276   60948 retry.go:31] will retry after 512.413163ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.156233   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.156263   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.156268   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.156272   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.156279   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.156284   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.156301   60948 retry.go:31] will retry after 775.973999ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.937928   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.937958   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.937966   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.937973   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.937983   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.937990   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.938009   60948 retry.go:31] will retry after 831.74396ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:00.775403   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:00.775427   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:00.775432   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:00.775436   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:00.775442   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:00.775447   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:00.775461   60948 retry.go:31] will retry after 1.069326929s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:01.849879   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:01.849906   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:01.849911   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:01.849915   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:01.849922   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:01.849927   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:01.849944   60948 retry.go:31] will retry after 1.540430535s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.271568   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:58.772443   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.271781   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.771732   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.272235   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.771891   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.271870   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.772445   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.271997   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.772496   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.395395   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:03.395421   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:03.395427   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:03.395431   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:03.395437   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:03.395442   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:03.395458   60948 retry.go:31] will retry after 2.25868002s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:05.661953   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:05.661988   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:05.661997   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:05.662005   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:05.662016   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:05.662026   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:05.662047   60948 retry.go:31] will retry after 2.893719866s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:03.272067   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.771992   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.272187   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.772518   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.272480   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.772460   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.272463   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.772291   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.271662   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.772063   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.272491   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.414409   60628 kubeadm.go:1088] duration metric: took 11.712956328s to wait for elevateKubeSystemPrivileges.
	I1212 21:16:08.414452   60628 kubeadm.go:406] StartCluster complete in 5m10.714058162s
	I1212 21:16:08.414480   60628 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.414582   60628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:16:08.417772   60628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.418132   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:16:08.418167   60628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:16:08.418267   60628 addons.go:69] Setting storage-provisioner=true in profile "no-preload-343495"
	I1212 21:16:08.418281   60628 addons.go:69] Setting default-storageclass=true in profile "no-preload-343495"
	I1212 21:16:08.418289   60628 addons.go:231] Setting addon storage-provisioner=true in "no-preload-343495"
	W1212 21:16:08.418297   60628 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:16:08.418301   60628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-343495"
	I1212 21:16:08.418310   60628 addons.go:69] Setting metrics-server=true in profile "no-preload-343495"
	I1212 21:16:08.418344   60628 addons.go:231] Setting addon metrics-server=true in "no-preload-343495"
	I1212 21:16:08.418349   60628 host.go:66] Checking if "no-preload-343495" exists ...
	W1212 21:16:08.418353   60628 addons.go:240] addon metrics-server should already be in state true
	I1212 21:16:08.418367   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:16:08.418401   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418810   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418850   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.437816   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I1212 21:16:08.438320   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.438921   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.438945   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.439225   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I1212 21:16:08.439418   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.439740   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.439809   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1212 21:16:08.440064   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.440092   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.440471   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.440491   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.440499   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.440887   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.440978   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.441002   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.441399   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.441442   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.441724   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.441960   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.446221   60628 addons.go:231] Setting addon default-storageclass=true in "no-preload-343495"
	W1212 21:16:08.446247   60628 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:16:08.446276   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.446655   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.446690   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.456479   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1212 21:16:08.456883   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.457330   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.457343   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.457784   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.457958   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.459741   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.461624   60628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:16:08.462951   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:16:08.462963   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:16:08.462978   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.462595   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1212 21:16:08.463831   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.464424   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.464443   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.465295   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.465627   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.467919   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468652   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.468681   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468905   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.469083   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.469197   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.469296   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.472614   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.474536   60628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:16:08.475957   60628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.475976   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:16:08.475995   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.476821   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I1212 21:16:08.477241   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.477772   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.477796   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.478322   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.479408   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.479457   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.479725   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480262   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.480285   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480565   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.480760   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.480909   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.481087   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.496182   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1212 21:16:08.496703   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.497250   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.497275   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.497705   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.497959   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.499696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.500049   60628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.500071   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:16:08.500098   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.503216   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503689   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.503717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503979   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.504187   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.504348   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.504521   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.519292   60628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-343495" context rescaled to 1 replicas
	I1212 21:16:08.519324   60628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:16:08.521243   60628 out.go:177] * Verifying Kubernetes components...
	I1212 21:16:08.522602   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:08.637693   60628 node_ready.go:35] waiting up to 6m0s for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.638072   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:16:08.640594   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:16:08.640620   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:16:08.645008   60628 node_ready.go:49] node "no-preload-343495" has status "Ready":"True"
	I1212 21:16:08.645041   60628 node_ready.go:38] duration metric: took 7.313798ms waiting for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.645056   60628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.650650   60628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658528   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.658556   60628 pod_ready.go:81] duration metric: took 7.881265ms waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658569   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682938   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.682962   60628 pod_ready.go:81] duration metric: took 24.384424ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682975   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.683220   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.688105   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:16:08.688131   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:16:08.695007   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.695034   60628 pod_ready.go:81] duration metric: took 12.050101ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.695046   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701206   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.701230   60628 pod_ready.go:81] duration metric: took 6.174333ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701240   60628 pod_ready.go:38] duration metric: took 56.165354ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.701262   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:16:08.701321   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:16:08.744650   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.758415   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:08.758444   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:16:08.841030   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:09.387385   60628 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1212 21:16:10.224475   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541186317s)
	I1212 21:16:10.224515   60628 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.523170366s)
	I1212 21:16:10.224548   60628 api_server.go:72] duration metric: took 1.705201863s to wait for apiserver process to appear ...
	I1212 21:16:10.224561   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:16:10.224571   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.479890747s)
	I1212 21:16:10.224606   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224579   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:16:10.224621   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.224522   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224686   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225001   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225050   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225065   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225074   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225011   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225019   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225020   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225115   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225130   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225140   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225347   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225358   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225507   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225572   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225600   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.233359   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:16:10.237567   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:16:10.237593   60628 api_server.go:131] duration metric: took 13.024501ms to wait for apiserver health ...
	I1212 21:16:10.237602   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:16:10.268851   60628 system_pods.go:59] 9 kube-system pods found
	I1212 21:16:10.268891   60628 system_pods.go:61] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268903   60628 system_pods.go:61] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268912   60628 system_pods.go:61] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.268920   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.268927   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.268936   60628 system_pods.go:61] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.268943   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.268953   60628 system_pods.go:61] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.268963   60628 system_pods.go:61] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending
	I1212 21:16:10.268971   60628 system_pods.go:74] duration metric: took 31.361836ms to wait for pod list to return data ...
	I1212 21:16:10.268987   60628 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:16:10.270947   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.270971   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.271270   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.271290   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.280134   60628 default_sa.go:45] found service account: "default"
	I1212 21:16:10.280159   60628 default_sa.go:55] duration metric: took 11.163534ms for default service account to be created ...
	I1212 21:16:10.280169   60628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:16:10.314822   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.314864   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314873   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314879   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.314886   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.314893   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.314903   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.314912   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.314923   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.314937   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.314957   60628 retry.go:31] will retry after 284.074155ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.328798   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.487713481s)
	I1212 21:16:10.328851   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.328866   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329251   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329276   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329276   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.329291   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.329304   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329540   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329556   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329566   60628 addons.go:467] Verifying addon metrics-server=true in "no-preload-343495"
	I1212 21:16:10.332474   60628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:16:08.563361   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:08.563393   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:08.563401   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:08.563408   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:08.563420   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:08.563427   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:08.563449   60948 retry.go:31] will retry after 2.871673075s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:11.441932   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:11.441970   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:11.441977   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:11.441983   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:11.441993   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.442003   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:11.442022   60948 retry.go:31] will retry after 3.977150615s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:10.333924   60628 addons.go:502] enable addons completed in 1.915760025s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:16:10.616684   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.616724   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616739   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616748   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.616757   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.616764   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.616775   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.616785   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.616795   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.616807   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.616825   60628 retry.go:31] will retry after 291.662068ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.919064   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.919104   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919114   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919125   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.919135   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.919142   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.919152   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.919160   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.919211   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.919229   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.919259   60628 retry.go:31] will retry after 381.992278ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.312083   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.312115   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.312121   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.312128   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.312137   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.312146   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.312152   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.312162   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.312170   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.312189   60628 retry.go:31] will retry after 495.705235ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.820167   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.820200   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.820205   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.820212   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.820217   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.820222   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.820226   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.820232   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.820237   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.820254   60628 retry.go:31] will retry after 635.810888ms: missing components: kube-dns, kube-proxy
	I1212 21:16:12.464096   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:12.464139   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Running
	I1212 21:16:12.464145   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:12.464149   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:12.464154   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:12.464158   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Running
	I1212 21:16:12.464162   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:12.464168   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:12.464176   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Running
	I1212 21:16:12.464185   60628 system_pods.go:126] duration metric: took 2.184010512s to wait for k8s-apps to be running ...
	I1212 21:16:12.464192   60628 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:12.464272   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:12.480090   60628 system_svc.go:56] duration metric: took 15.887114ms WaitForService to wait for kubelet.
	I1212 21:16:12.480124   60628 kubeadm.go:581] duration metric: took 3.960778694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:12.480163   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:12.483564   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:12.483589   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:12.483601   60628 node_conditions.go:105] duration metric: took 3.433071ms to run NodePressure ...
	I1212 21:16:12.483612   60628 start.go:228] waiting for startup goroutines ...
	I1212 21:16:12.483617   60628 start.go:233] waiting for cluster config update ...
	I1212 21:16:12.483626   60628 start.go:242] writing updated cluster config ...
	I1212 21:16:12.483887   60628 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:12.534680   60628 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 21:16:12.536622   60628 out.go:177] * Done! kubectl is now configured to use "no-preload-343495" cluster and "default" namespace by default
	I1212 21:16:15.424662   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:15.424691   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:15.424697   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:15.424701   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:15.424707   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:15.424712   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:15.424728   60948 retry.go:31] will retry after 4.920488737s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:20.351078   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:20.351107   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:20.351112   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:20.351116   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:20.351122   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:20.351127   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:20.351143   60948 retry.go:31] will retry after 5.718245097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:26.077073   60948 system_pods.go:86] 6 kube-system pods found
	I1212 21:16:26.077097   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:26.077103   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:26.077107   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Pending
	I1212 21:16:26.077111   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:26.077117   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:26.077122   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:26.077139   60948 retry.go:31] will retry after 8.251519223s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:34.334757   60948 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:34.334782   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:34.334787   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:34.334791   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:34.334796   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Pending
	I1212 21:16:34.334799   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:34.334804   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:34.334811   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:34.334815   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:34.334830   60948 retry.go:31] will retry after 8.584990669s: missing components: kube-apiserver, kube-scheduler
	I1212 21:16:42.927591   60948 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:42.927618   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:42.927624   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:42.927628   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:42.927632   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Running
	I1212 21:16:42.927637   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:42.927642   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:42.927647   60948 system_pods.go:89] "kube-scheduler-old-k8s-version-372099" [0e3e4e58-289f-47f1-999b-8fd87b90558a] Running
	I1212 21:16:42.927653   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:42.927658   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:42.927667   60948 system_pods.go:126] duration metric: took 45.639007967s to wait for k8s-apps to be running ...
	I1212 21:16:42.927673   60948 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:42.927715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:42.948680   60948 system_svc.go:56] duration metric: took 20.9943ms WaitForService to wait for kubelet.
	I1212 21:16:42.948711   60948 kubeadm.go:581] duration metric: took 56.56793182s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:42.948735   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:42.952462   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:42.952493   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:42.952505   60948 node_conditions.go:105] duration metric: took 3.763543ms to run NodePressure ...
	I1212 21:16:42.952518   60948 start.go:228] waiting for startup goroutines ...
	I1212 21:16:42.952527   60948 start.go:233] waiting for cluster config update ...
	I1212 21:16:42.952541   60948 start.go:242] writing updated cluster config ...
	I1212 21:16:42.952847   60948 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:43.001964   60948 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 21:16:43.003962   60948 out.go:177] 
	W1212 21:16:43.005327   60948 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 21:16:43.006827   60948 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 21:16:43.008259   60948 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-372099" cluster and "default" namespace by default
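The start log above only finishes once minikube's readiness loop sees every required kube-system component Running; metrics-server is tolerated in Pending. The same check can be reproduced by hand against the context minikube just configured (a minimal sketch; the tier=control-plane selector assumes the usual kubeadm static-pod labels):

    kubectl --context old-k8s-version-372099 get pods -n kube-system
    kubectl --context old-k8s-version-372099 wait --for=condition=Ready pod -l tier=control-plane -n kube-system --timeout=120s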
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:09:39 UTC, ends at Tue 2023-12-12 21:25:44 UTC. --
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.734615816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416344734595791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=efe7863b-7fdb-44ca-acde-54925aa1ecd9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.735364683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=af38981f-9350-44f0-a292-965484f95df7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.735502417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=af38981f-9350-44f0-a292-965484f95df7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.735697187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=af38981f-9350-44f0-a292-965484f95df7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.785510672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6960f2f5-ea11-41e3-a331-c5a5b2ed01d8 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.785619607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6960f2f5-ea11-41e3-a331-c5a5b2ed01d8 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.788003293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=78631979-d9f8-4e09-a958-d71ce384d1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.788380958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416344788367278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=78631979-d9f8-4e09-a958-d71ce384d1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.789303756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e4aa6ed-1f0b-4e2d-a152-cec5f25d48c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.789377985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e4aa6ed-1f0b-4e2d-a152-cec5f25d48c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.789613319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e4aa6ed-1f0b-4e2d-a152-cec5f25d48c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.820713045Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{},},Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=e7d5ea60-6e90-4ddd-9e1e-44c04903dbc7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.820858816Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:30" id=e7d5ea60-6e90-4ddd-9e1e-44c04903dbc7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.821013974Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\"" file="storage/storage_transport.go:185"
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.821118310Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:147"
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.821186278Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:47" id=e7d5ea60-6e90-4ddd-9e1e-44c04903dbc7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.821207793Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:90" id=e7d5ea60-6e90-4ddd-9e1e-44c04903dbc7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.821230346Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e7d5ea60-6e90-4ddd-9e1e-44c04903dbc7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.829448940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=11f8731a-245f-48fc-ae65-fd9f04bffa3f name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.829533521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=11f8731a-245f-48fc-ae65-fd9f04bffa3f name=/runtime.v1.RuntimeService/Version
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.830991944Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5c2b600b-24dc-4776-b2aa-a9d816e87700 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.831599323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416344831579200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=5c2b600b-24dc-4776-b2aa-a9d816e87700 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.832208649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e635f08b-60ae-4cf9-a822-e65b2972af63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.832294930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e635f08b-60ae-4cf9-a822-e65b2972af63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:25:44 old-k8s-version-372099 crio[715]: time="2023-12-12 21:25:44.832551890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e635f08b-60ae-4cf9-a822-e65b2972af63 name=/runtime.v1.RuntimeService/ListContainers
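The repeated ImageStatus requests for fake.domain/registry.k8s.io/echoserver:1.4 are the other side of the Pending metrics-server pod seen earlier in this log: CRI-O cannot resolve an image behind the fake.domain registry, so the kubelet keeps re-checking it. The same lookup can be issued on the node directly (a sketch, reusing the profile name from this run):

    out/minikube-linux-amd64 -p old-k8s-version-372099 ssh "sudo crictl inspecti fake.domain/registry.k8s.io/echoserver:1.4"

crictl should report the image as not present, matching the "Image ... not found" responses in the journal above.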
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a86d4c17d7119       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   bf65a58303fb9       storage-provisioner
	d830469561f0e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   9 minutes ago       Running             kube-proxy                0                   75407785556de       kube-proxy-vzqkz
	6084f17e07859       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   9 minutes ago       Running             coredns                   0                   50d0568642377       coredns-5644d7b6d9-bd52f
	b1b2934a77797       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   9 minutes ago       Running             coredns                   0                   4bdcafdd6af6f       coredns-5644d7b6d9-cn5ch
	b1729474db3e1       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   bb301472398ec       etcd-old-k8s-version-372099
	984fd725da2f0       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   1970b67fe4375       kube-scheduler-old-k8s-version-372099
	99f95412173dd       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   64cf126e7e9ad       kube-controller-manager-old-k8s-version-372099
	7efe2c7c23a8f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   55f427319ae8c       kube-apiserver-old-k8s-version-372099
	457b7a6cb9832       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   55f427319ae8c       kube-apiserver-old-k8s-version-372099
	
	
	==> coredns [6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f] <==
	.:53
	2023-12-12T21:15:49.277Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-12T21:15:49.277Z [INFO] CoreDNS-1.6.2
	2023-12-12T21:15:49.277Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-12T21:15:49.296Z [INFO] 127.0.0.1:45076 - 57804 "HINFO IN 4110017162655409842.5650151957092772318. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017309537s
	
	
	==> coredns [b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249] <==
	.:53
	2023-12-12T21:15:49.174Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-12T21:15:49.175Z [INFO] CoreDNS-1.6.2
	2023-12-12T21:15:49.175Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-12T21:15:49.188Z [INFO] 127.0.0.1:43809 - 5847 "HINFO IN 7768456833403375853.4299604537471653683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015274703s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-372099
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-372099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=old-k8s-version-372099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:15:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:25:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:25:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:25:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:25:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    old-k8s-version-372099
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 fc3e555bcb6b471382a2733409d8eed0
	 System UUID:                fc3e555b-cb6b-4713-82a2-733409d8eed0
	 Boot ID:                    86498489-2351-495d-9062-a47090f2d467
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-bd52f                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m59s
	  kube-system                coredns-5644d7b6d9-cn5ch                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m59s
	  kube-system                etcd-old-k8s-version-372099                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                kube-apiserver-old-k8s-version-372099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                kube-controller-manager-old-k8s-version-372099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-proxy-vzqkz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                kube-scheduler-old-k8s-version-372099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                metrics-server-74d5856cc6-7bvqn                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m57s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 10m                kubelet, old-k8s-version-372099     Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet, old-k8s-version-372099     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-372099     Node old-k8s-version-372099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-372099     Node old-k8s-version-372099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-372099     Node old-k8s-version-372099 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m56s              kube-proxy, old-k8s-version-372099  Starting kube-proxy.
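The node description above corresponds to kubectl's describe output for the single control-plane node; individual fields can also be pulled non-interactively (a sketch, assuming the same context name as above):

    kubectl --context old-k8s-version-372099 describe node old-k8s-version-372099
    kubectl --context old-k8s-version-372099 get node old-k8s-version-372099 -o jsonpath='{.status.allocatable}'

Allocatable is what the scheduler budgets against, which is why the 850m of CPU requests shows as 42% of the node's 2 CPUs.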
	
	
	==> dmesg <==
	[Dec12 21:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068504] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.746890] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.558480] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153277] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.442557] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.119154] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.118210] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.160906] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.121507] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.236518] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Dec12 21:10] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[  +0.428968] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.952906] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.277966] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 21:15] systemd-fstab-generator[3139]: Ignoring "noauto" for root device
	[ +29.481995] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.543941] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287] <==
	2023-12-12 21:15:22.464407 I | raft: f9de38f1a7e06692 became follower at term 0
	2023-12-12 21:15:22.464462 I | raft: newRaft f9de38f1a7e06692 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-12 21:15:22.464486 I | raft: f9de38f1a7e06692 became follower at term 1
	2023-12-12 21:15:22.475212 W | auth: simple token is not cryptographically signed
	2023-12-12 21:15:22.488048 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-12 21:15:22.490302 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 21:15:22.490560 I | embed: listening for metrics on http://192.168.39.202:2381
	2023-12-12 21:15:22.490992 I | etcdserver: f9de38f1a7e06692 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-12 21:15:22.491475 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 21:15:22.491896 I | etcdserver/membership: added member f9de38f1a7e06692 [https://192.168.39.202:2380] to cluster e4e52c0b9ecc5e15
	2023-12-12 21:15:22.565099 I | raft: f9de38f1a7e06692 is starting a new election at term 1
	2023-12-12 21:15:22.565381 I | raft: f9de38f1a7e06692 became candidate at term 2
	2023-12-12 21:15:22.565513 I | raft: f9de38f1a7e06692 received MsgVoteResp from f9de38f1a7e06692 at term 2
	2023-12-12 21:15:22.565622 I | raft: f9de38f1a7e06692 became leader at term 2
	2023-12-12 21:15:22.565646 I | raft: raft.node: f9de38f1a7e06692 elected leader f9de38f1a7e06692 at term 2
	2023-12-12 21:15:22.566299 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-12 21:15:22.567569 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-12 21:15:22.568540 I | etcdserver: published {Name:old-k8s-version-372099 ClientURLs:[https://192.168.39.202:2379]} to cluster e4e52c0b9ecc5e15
	2023-12-12 21:15:22.568679 I | embed: ready to serve client requests
	2023-12-12 21:15:22.572099 I | embed: serving client requests on 192.168.39.202:2379
	2023-12-12 21:15:22.572412 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-12 21:15:22.572665 I | embed: ready to serve client requests
	2023-12-12 21:15:22.583507 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 21:25:22.604253 I | mvcc: store.index: compact 679
	2023-12-12 21:25:22.606982 I | mvcc: finished scheduled compaction at 679 (took 1.985822ms)
	
	
	==> kernel <==
	 21:25:45 up 16 min,  0 users,  load average: 0.00, 0.05, 0.06
	Linux old-k8s-version-372099 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b] <==
	W1212 21:15:18.186727       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.194111       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.202128       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.209448       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.224405       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.227851       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.247326       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.265874       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.277224       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.279283       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.288240       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.305606       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.307576       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.317120       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.325137       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.329464       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.329568       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.350643       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.360441       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.366709       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.367089       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.367093       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.376956       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.384826       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.395548       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f] <==
	I1212 21:18:49.408680       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:18:49.409117       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:18:49.409244       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:18:49.409278       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:20:26.898919       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:20:26.899295       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:20:26.899389       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:20:26.899417       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:21:26.900032       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:21:26.900369       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:21:26.900491       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:21:26.900623       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:23:26.901434       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:23:26.901896       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:23:26.902014       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:23:26.902045       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:25:26.903063       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:25:26.903177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:25:26.903237       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:25:26.903268       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42] <==
	E1212 21:19:18.152190       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:19:30.394211       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:19:48.404561       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:20:02.396633       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:20:18.657002       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:20:34.399154       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:20:48.909493       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:21:06.401135       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:21:19.161496       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:21:38.403105       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:21:49.413725       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:22:10.405607       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:22:19.665727       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:22:42.408001       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:22:49.917915       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:23:14.410402       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:23:20.170829       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:23:46.412738       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:23:50.423033       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:24:18.416019       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:24:20.675342       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:24:50.418323       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:24:50.927479       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1212 21:25:21.179369       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:25:22.420487       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d] <==
	W1212 21:15:49.532549       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1212 21:15:49.558088       1 node.go:135] Successfully retrieved node IP: 192.168.39.202
	I1212 21:15:49.558235       1 server_others.go:149] Using iptables Proxier.
	I1212 21:15:49.560810       1 server.go:529] Version: v1.16.0
	I1212 21:15:49.564621       1 config.go:313] Starting service config controller
	I1212 21:15:49.564683       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1212 21:15:49.562710       1 config.go:131] Starting endpoints config controller
	I1212 21:15:49.564738       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1212 21:15:49.666331       1 shared_informer.go:204] Caches are synced for service config 
	I1212 21:15:49.676990       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3] <==
	I1212 21:15:25.898220       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1212 21:15:25.957187       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 21:15:25.965456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:15:25.965580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:15:25.965658       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 21:15:25.965903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 21:15:25.966403       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:15:25.968129       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:15:25.968282       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:25.968694       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 21:15:25.972375       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:25.972472       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:26.963666       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 21:15:26.968920       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:15:26.974126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:15:26.975156       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 21:15:26.978351       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 21:15:26.979697       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:15:26.982620       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:15:26.983417       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:26.984608       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 21:15:26.987867       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:26.988559       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:46.320945       1 factory.go:585] pod is already present in the activeQ
	E1212 21:15:46.445012       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:09:39 UTC, ends at Tue 2023-12-12 21:25:45 UTC. --
	Dec 12 21:21:18 old-k8s-version-372099 kubelet[3145]: E1212 21:21:18.822124    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:21:33 old-k8s-version-372099 kubelet[3145]: E1212 21:21:33.836829    3145 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:21:33 old-k8s-version-372099 kubelet[3145]: E1212 21:21:33.836913    3145 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:21:33 old-k8s-version-372099 kubelet[3145]: E1212 21:21:33.836972    3145 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:21:33 old-k8s-version-372099 kubelet[3145]: E1212 21:21:33.837015    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 12 21:21:47 old-k8s-version-372099 kubelet[3145]: E1212 21:21:47.822122    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:21:59 old-k8s-version-372099 kubelet[3145]: E1212 21:21:59.822506    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:22:10 old-k8s-version-372099 kubelet[3145]: E1212 21:22:10.822852    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:22:25 old-k8s-version-372099 kubelet[3145]: E1212 21:22:25.822143    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:22:40 old-k8s-version-372099 kubelet[3145]: E1212 21:22:40.821145    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:22:55 old-k8s-version-372099 kubelet[3145]: E1212 21:22:55.821621    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:23:08 old-k8s-version-372099 kubelet[3145]: E1212 21:23:08.822241    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:23:21 old-k8s-version-372099 kubelet[3145]: E1212 21:23:21.823061    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:23:33 old-k8s-version-372099 kubelet[3145]: E1212 21:23:33.821589    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:23:48 old-k8s-version-372099 kubelet[3145]: E1212 21:23:48.822077    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:24:03 old-k8s-version-372099 kubelet[3145]: E1212 21:24:03.822724    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:24:15 old-k8s-version-372099 kubelet[3145]: E1212 21:24:15.821359    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:24:28 old-k8s-version-372099 kubelet[3145]: E1212 21:24:28.821464    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:24:40 old-k8s-version-372099 kubelet[3145]: E1212 21:24:40.821486    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:24:52 old-k8s-version-372099 kubelet[3145]: E1212 21:24:52.822513    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:05 old-k8s-version-372099 kubelet[3145]: E1212 21:25:05.821346    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:19 old-k8s-version-372099 kubelet[3145]: E1212 21:25:19.825652    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:19 old-k8s-version-372099 kubelet[3145]: E1212 21:25:19.922328    3145 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 12 21:25:32 old-k8s-version-372099 kubelet[3145]: E1212 21:25:32.821961    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:44 old-k8s-version-372099 kubelet[3145]: E1212 21:25:44.821469    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c] <==
	I1212 21:15:49.687955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:15:49.704387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:15:49.704513       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:15:49.717483       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:15:49.719760       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-372099_9c373048-b63a-4f19-8ac7-5f4a944596ed!
	I1212 21:15:49.719609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"844a18be-5145-4e70-9a82-93e0dff5efba", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-372099_9c373048-b63a-4f19-8ac7-5f4a944596ed became leader
	I1212 21:15:49.824533       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-372099_9c373048-b63a-4f19-8ac7-5f4a944596ed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-372099 -n old-k8s-version-372099
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-372099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-7bvqn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-372099 describe pod metrics-server-74d5856cc6-7bvqn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-372099 describe pod metrics-server-74d5856cc6-7bvqn: exit status 1 (69.708522ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-7bvqn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-372099 describe pod metrics-server-74d5856cc6-7bvqn: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (459.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:23:45.800675   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:23:56.433199   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831188 -n embed-certs-831188
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:30:58.300021051 +0000 UTC m=+5659.520193621
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-831188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-831188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-831188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-831188 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-831188 logs -n 25: (1.302965405s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:29 UTC | 12 Dec 23 21:29 UTC |
	| start   | -p newest-cni-422706 --memory=2200 --alsologtostderr   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:29 UTC | 12 Dec 23 21:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	| addons  | enable metrics-server -p newest-cni-422706             | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-422706                                   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-422706                  | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-422706 --memory=2200 --alsologtostderr   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:30:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:30:43.504910   67309 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:30:43.505160   67309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:43.505170   67309 out.go:309] Setting ErrFile to fd 2...
	I1212 21:30:43.505175   67309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:43.505410   67309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:30:43.505986   67309 out.go:303] Setting JSON to false
	I1212 21:30:43.506958   67309 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7998,"bootTime":1702408646,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:30:43.507020   67309 start.go:138] virtualization: kvm guest
	I1212 21:30:43.508833   67309 out.go:177] * [newest-cni-422706] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:30:43.510707   67309 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:30:43.510727   67309 notify.go:220] Checking for updates...
	I1212 21:30:43.512011   67309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:30:43.513451   67309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:30:43.514693   67309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:30:43.516637   67309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:30:43.518173   67309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:30:43.520106   67309 config.go:182] Loaded profile config "newest-cni-422706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:30:43.520689   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:30:43.520744   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:30:43.535079   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I1212 21:30:43.535472   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:30:43.536043   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:30:43.536068   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:30:43.536493   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:30:43.536685   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:43.536926   67309 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:30:43.537208   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:30:43.537239   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:30:43.551973   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I1212 21:30:43.552366   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:30:43.552898   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:30:43.552928   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:30:43.553258   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:30:43.553444   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:43.590100   67309 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:30:43.591354   67309 start.go:298] selected driver: kvm2
	I1212 21:30:43.591371   67309 start.go:902] validating driver "kvm2" against &{Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:30:43.591483   67309 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:30:43.592410   67309 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:43.592519   67309 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:30:43.609387   67309 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:30:43.609771   67309 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:30:43.609832   67309 cni.go:84] Creating CNI manager for ""
	I1212 21:30:43.609846   67309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:30:43.609858   67309 start_flags.go:323] config:
	{Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:30:43.609987   67309 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:43.611680   67309 out.go:177] * Starting control plane node newest-cni-422706 in cluster newest-cni-422706
	I1212 21:30:43.613027   67309 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:30:43.613067   67309 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 21:30:43.613089   67309 cache.go:56] Caching tarball of preloaded images
	I1212 21:30:43.613194   67309 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:30:43.613215   67309 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 21:30:43.613343   67309 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/config.json ...
	I1212 21:30:43.613528   67309 start.go:365] acquiring machines lock for newest-cni-422706: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:30:43.613572   67309 start.go:369] acquired machines lock for "newest-cni-422706" in 25.963µs
	I1212 21:30:43.613589   67309 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:30:43.613597   67309 fix.go:54] fixHost starting: 
	I1212 21:30:43.613866   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:30:43.613907   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:30:43.627489   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I1212 21:30:43.627963   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:30:43.628504   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:30:43.628526   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:30:43.628826   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:30:43.629060   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:43.629257   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:30:43.630824   67309 fix.go:102] recreateIfNeeded on newest-cni-422706: state=Stopped err=<nil>
	I1212 21:30:43.630863   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	W1212 21:30:43.631026   67309 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:30:43.633819   67309 out.go:177] * Restarting existing kvm2 VM for "newest-cni-422706" ...
	I1212 21:30:43.635625   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Start
	I1212 21:30:43.635850   67309 main.go:141] libmachine: (newest-cni-422706) Ensuring networks are active...
	I1212 21:30:43.636650   67309 main.go:141] libmachine: (newest-cni-422706) Ensuring network default is active
	I1212 21:30:43.636963   67309 main.go:141] libmachine: (newest-cni-422706) Ensuring network mk-newest-cni-422706 is active
	I1212 21:30:43.637262   67309 main.go:141] libmachine: (newest-cni-422706) Getting domain xml...
	I1212 21:30:43.637931   67309 main.go:141] libmachine: (newest-cni-422706) Creating domain...
	I1212 21:30:44.912579   67309 main.go:141] libmachine: (newest-cni-422706) Waiting to get IP...
	I1212 21:30:44.913452   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:44.913801   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:44.913891   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:44.913795   67344 retry.go:31] will retry after 201.193598ms: waiting for machine to come up
	I1212 21:30:45.116325   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:45.116952   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:45.116989   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:45.116876   67344 retry.go:31] will retry after 378.928404ms: waiting for machine to come up
	I1212 21:30:45.497378   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:45.497829   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:45.497853   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:45.497776   67344 retry.go:31] will retry after 395.425408ms: waiting for machine to come up
	I1212 21:30:45.894305   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:45.894748   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:45.894770   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:45.894722   67344 retry.go:31] will retry after 501.520185ms: waiting for machine to come up
	I1212 21:30:46.397311   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:46.397780   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:46.397803   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:46.397726   67344 retry.go:31] will retry after 587.486964ms: waiting for machine to come up
	I1212 21:30:46.986459   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:46.988250   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:46.988293   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:46.988130   67344 retry.go:31] will retry after 910.026428ms: waiting for machine to come up
	I1212 21:30:47.899682   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:47.900147   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:47.900175   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:47.900100   67344 retry.go:31] will retry after 1.092954286s: waiting for machine to come up
	I1212 21:30:48.994909   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:48.995398   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:48.995428   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:48.995353   67344 retry.go:31] will retry after 1.081223185s: waiting for machine to come up
	I1212 21:30:50.077929   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:50.078385   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:50.078407   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:50.078332   67344 retry.go:31] will retry after 1.609230983s: waiting for machine to come up
	I1212 21:30:51.690011   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:51.690456   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:51.690491   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:51.690401   67344 retry.go:31] will retry after 1.542334592s: waiting for machine to come up
	I1212 21:30:53.234536   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:53.234853   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:53.234890   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:53.234774   67344 retry.go:31] will retry after 2.858549698s: waiting for machine to come up
	I1212 21:30:56.095135   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:56.095683   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:56.095714   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:56.095623   67344 retry.go:31] will retry after 2.56857983s: waiting for machine to come up
	
	
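	(Editor's note, not part of the captured log: the retry lines above show minikube polling libvirt for the restarted VM's IP with progressively longer waits. The following is a minimal, purely illustrative Go sketch of that retry-with-growing-backoff pattern; it is not minikube's actual implementation, and `lookupIP` is a hypothetical stand-in for the libvirt/DHCP-lease query.)

	```go
	// Illustrative sketch only: poll for a VM's IP with growing wait intervals,
	// mirroring the "will retry after ...: waiting for machine to come up" lines.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a placeholder; here it fails until the fourth attempt so the
	// backoff behaviour is visible when the sketch is run.
	func lookupIP(attempt int) (string, error) {
		if attempt < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.163", nil // address taken from the profile config above
	}

	func main() {
		wait := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Printf("machine is up at %s\n", ip)
				return
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			wait = wait * 3 / 2 // grow the interval, roughly as the log shows
		}
	}
	```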
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:09:17 UTC, ends at Tue 2023-12-12 21:30:59 UTC. --
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.009421348Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c3f151c8-69ac-4783-b525-035f3955a799,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415399944238065,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T21:09:51.972971179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zj5wn,Uid:8f51596e-d7e1-40de-9394-5788ff7fde7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415399640683
912,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T21:09:51.972975785Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05ba6512412b4f50875e320705c7ce71bfc731ff1a4b0f9ce6b5f56b092bf342,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-v978l,Uid:5870eb0c-b40b-4fc5-bf09-de1ed799993c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415396046150638,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-v978l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5870eb0c-b40b-4fc5-bf09-de1ed799993c,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T21:09:51.
972983979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a48c6632-0d79-4b43-ad2b-78c090c9b1f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415392342991814,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-12T21:09:51.972968801Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&PodSandboxMetadata{Name:kube-proxy-nsv4w,Uid:621a8605-777d-4fab-8884-16de1091e792,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415392315076312,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-777d-4fab-8884-16de1091e792,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2023-12-12T21:09:51.972979177Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-831188,Uid:d237398c7af5429d966c72c07b5538ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385511379089,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d966c72c07b5538ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d237398c7af5429d966c72c07b5538ba,kubernetes.io/config.seen: 2023-12-12T21:09:44.968239057Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-831188,Uid:1b5bc1d0aeeed3fa69e39920f199d3e
4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385499837429,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1b5bc1d0aeeed3fa69e39920f199d3e4,kubernetes.io/config.seen: 2023-12-12T21:09:44.968238196Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-831188,Uid:ae7f31f59995b6074da63b24822c15b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385490848276,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f599
95b6074da63b24822c15b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.163:2379,kubernetes.io/config.hash: ae7f31f59995b6074da63b24822c15b8,kubernetes.io/config.seen: 2023-12-12T21:09:44.968232926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-831188,Uid:4bc6a9c01130e3674685653344c69aea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702415385486624631,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.163:8443,kubernetes.io/config.hash: 4bc6a9c01130e3674685653344
c69aea,kubernetes.io/config.seen: 2023-12-12T21:09:44.968236871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=38ff6722-6a10-40fe-bb05-0efefc29b174 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.010275630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cb7164e0-56ea-4309-8ec1-1ad54a2149d3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.010333197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cb7164e0-56ea-4309-8ec1-1ad54a2149d3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.010502569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d
966c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{i
o.kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map
[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cb7164e0-56ea-4309-8ec1-1ad54a2149d3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.051210969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=53886ee4-007f-4b29-99cd-dc38dcacd258 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.051269802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=53886ee4-007f-4b29-99cd-dc38dcacd258 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.052859733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=25ae55ad-2533-417c-acd5-c2c55f2a2b7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.053235418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416659053220015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=25ae55ad-2533-417c-acd5-c2c55f2a2b7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.054069974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7aca3a8c-6d3e-4d00-92b6-7c76bcae2d20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.054116660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7aca3a8c-6d3e-4d00-92b6-7c76bcae2d20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.054296703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7aca3a8c-6d3e-4d00-92b6-7c76bcae2d20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.097469639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=95a2d87b-83e1-47cf-9ab5-71caccb16b9b name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.097528677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=95a2d87b-83e1-47cf-9ab5-71caccb16b9b name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.099234389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=716bfe82-419d-4ad3-981f-b02d79cfd2eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.099596446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416659099584435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=716bfe82-419d-4ad3-981f-b02d79cfd2eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.100594256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d6de4b16-8c2e-4a4c-8b43-1d3ae9f51d2e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.100650856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d6de4b16-8c2e-4a4c-8b43-1d3ae9f51d2e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.100914971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d6de4b16-8c2e-4a4c-8b43-1d3ae9f51d2e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.137881580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3b6500a5-f7f1-4d6c-84c2-ebb50eec2e5f name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.137938747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3b6500a5-f7f1-4d6c-84c2-ebb50eec2e5f name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.139193867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8d62be0e-3bf7-4ecf-b2e1-42ea33aa2f07 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.139545499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416659139534055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8d62be0e-3bf7-4ecf-b2e1-42ea33aa2f07 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.140229673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4d06509a-2f5e-4a02-9cf2-30094dbdc2e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.140293460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4d06509a-2f5e-4a02-9cf2-30094dbdc2e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:59 embed-certs-831188 crio[716]: time="2023-12-12 21:30:59.140489123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415424321549540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d79-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a52d85abb3d432d77c19849fb4cbb857b542e5a4b98036746db7ac5811eab5,PodSandboxId:f57cc23b614989cf11ff9a0c998c10c204a858bef38345b7b44ca914539f6a9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415402101159873,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c3f151c8-69ac-4783-b525-035f3955a799,},Annotations:map[string]string{io.kubernetes.container.hash: 8dffc520,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843,PodSandboxId:b79746546c948725b31bbf1ddfbf93939da3cadf60d621ce1b0dd7512f2c1b13,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415400336352080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zj5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f51596e-d7e1-40de-9394-5788ff7fde7b,},Annotations:map[string]string{io.kubernetes.container.hash: dbbf757,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f,PodSandboxId:45b833dcc94fd9ac9cc998a930220017a8ddd0c5169308626e017d2c72299b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415394066369189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsv4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621a8605-7
77d-4fab-8884-16de1091e792,},Annotations:map[string]string{io.kubernetes.container.hash: eba361c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653,PodSandboxId:77dd00140750bb9cf007914bb5edc03cfda5215a57e0109974e042d1aee6eb15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415393956072581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c6632-0d7
9-4b43-ad2b-78c090c9b1f8,},Annotations:map[string]string{io.kubernetes.container.hash: a3595c79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470,PodSandboxId:a8e06ca0d1aeaaacaee58abcd9753bd5022433e3da39151391cb4aeec413a274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415386885153205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d237398c7af5429d96
6c72c07b5538ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be,PodSandboxId:f76c5991fd388e49d610ef3715e66e4c39ec23dab1893c533eee44bf253c0969,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415386732909569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae7f31f59995b6074da63b24822c15b8,},Annotations:map[string]string{io.
kubernetes.container.hash: 24a98b3f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e,PodSandboxId:b168b4263329fc0a43199e4551a5297558e5c2dad33ba1b1282d02cf9ef959b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415386008301027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5bc1d0aeeed3fa69e39920f199d3e4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2,PodSandboxId:1cabfc321a2f035860b5371d62a01a04f638e429795429112f96c808ac2d551b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415385984454312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831188,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc6a9c01130e3674685653344c69aea,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4d06509a-2f5e-4a02-9cf2-30094dbdc2e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1703f1d5be8cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   77dd00140750b       storage-provisioner
	d0a52d85abb3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   f57cc23b61498       busybox
	41483ce2844cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   b79746546c948       coredns-5dd5756b68-zj5wn
	bc1393c2dcb25       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   45b833dcc94fd       kube-proxy-nsv4w
	0285b9b54f023       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   77dd00140750b       storage-provisioner
	6a76cf81a377e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   a8e06ca0d1aea       kube-scheduler-embed-certs-831188
	aa3b65804db3f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   f76c5991fd388       etcd-embed-certs-831188
	a8ada7ed54f93       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   b168b4263329f       kube-controller-manager-embed-certs-831188
	c8c7037baeaee       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   1cabfc321a2f0       kube-apiserver-embed-certs-831188
	
	
	==> coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54396 - 61564 "HINFO IN 667314211497334327.1269787668080689230. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008873466s
	
	
	==> describe nodes <==
	Name:               embed-certs-831188
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-831188
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=embed-certs-831188
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_01_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:01:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-831188
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 21:30:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:30:49 +0000   Tue, 12 Dec 2023 21:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:30:49 +0000   Tue, 12 Dec 2023 21:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:30:49 +0000   Tue, 12 Dec 2023 21:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:30:49 +0000   Tue, 12 Dec 2023 21:10:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.163
	  Hostname:    embed-certs-831188
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0060569b9eb9492eba6d6021718c1259
	  System UUID:                0060569b-9eb9-492e-ba6d-6021718c1259
	  Boot ID:                    33626dbd-5e61-42d3-9329-56af64902a4b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-zj5wn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-831188                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-831188             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-831188    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-nsv4w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-831188             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-v978l               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-831188 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-831188 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-831188 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-831188 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node embed-certs-831188 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-831188 event: Registered Node embed-certs-831188 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-831188 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-831188 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-831188 event: Registered Node embed-certs-831188 in Controller
	
	
	==> dmesg <==
	[Dec12 21:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069842] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.417665] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.559118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152673] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.451102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.183510] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.111389] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.152932] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.114050] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.238035] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.208630] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[ +15.004631] kauditd_printk_skb: 19 callbacks suppressed
	[Dec12 21:10] hrtimer: interrupt took 5107291 ns
	
	
	==> etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] <==
	{"level":"info","ts":"2023-12-12T21:10:23.026275Z","caller":"traceutil/trace.go:171","msg":"trace[1085323657] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"641.654385ms","start":"2023-12-12T21:10:22.384607Z","end":"2023-12-12T21:10:23.026262Z","steps":["trace[1085323657] 'process raft request'  (duration: 641.620162ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.026468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.384592Z","time spent":"641.801894ms","remote":"127.0.0.1:48206","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5738,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/embed-certs-831188\" mod_revision:562 > success:<request_put:<key:\"/registry/minions/embed-certs-831188\" value_size:5694 >> failure:<request_range:<key:\"/registry/minions/embed-certs-831188\" > >"}
	{"level":"info","ts":"2023-12-12T21:10:23.026892Z","caller":"traceutil/trace.go:171","msg":"trace[185495373] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"857.38573ms","start":"2023-12-12T21:10:22.169491Z","end":"2023-12-12T21:10:23.026876Z","steps":["trace[185495373] 'process raft request'  (duration: 856.70121ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.027026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.169474Z","time spent":"857.50333ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-831188\" mod_revision:584 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-831188\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-831188\" > >"}
	{"level":"info","ts":"2023-12-12T21:10:23.027114Z","caller":"traceutil/trace.go:171","msg":"trace[45005210] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"996.030041ms","start":"2023-12-12T21:10:22.031069Z","end":"2023-12-12T21:10:23.027099Z","steps":["trace[45005210] 'process raft request'  (duration: 935.884906ms)","trace[45005210] 'compare'  (duration: 59.111224ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T21:10:23.027229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.031058Z","time spent":"996.132207ms","remote":"127.0.0.1:48208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4056,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" mod_revision:581 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" value_size:3990 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" > >"}
	{"level":"info","ts":"2023-12-12T21:10:23.088758Z","caller":"traceutil/trace.go:171","msg":"trace[469649662] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:624; }","duration":"668.242757ms","start":"2023-12-12T21:10:22.420425Z","end":"2023-12-12T21:10:23.088668Z","steps":["trace[469649662] 'read index received'  (duration: 546.402678ms)","trace[469649662] 'applied index is now lower than readState.Index'  (duration: 121.838921ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T21:10:23.088942Z","caller":"traceutil/trace.go:171","msg":"trace[2010594245] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"675.813493ms","start":"2023-12-12T21:10:22.413117Z","end":"2023-12-12T21:10:23.08893Z","steps":["trace[2010594245] 'process raft request'  (duration: 675.415887ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.089062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.014524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.163\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2023-12-12T21:10:23.089134Z","caller":"traceutil/trace.go:171","msg":"trace[1617143922] range","detail":"{range_begin:/registry/masterleases/192.168.50.163; range_end:; response_count:1; response_revision:592; }","duration":"177.101026ms","start":"2023-12-12T21:10:22.912022Z","end":"2023-12-12T21:10:23.089123Z","steps":["trace[1617143922] 'agreement among raft nodes before linearized reading'  (duration: 176.973964ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.08924Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.413096Z","time spent":"675.958407ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-a4vo6d4pdmy2ttomkw477gqi2i\" mod_revision:585 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-a4vo6d4pdmy2ttomkw477gqi2i\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-a4vo6d4pdmy2ttomkw477gqi2i\" > >"}
	{"level":"warn","ts":"2023-12-12T21:10:23.089314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"668.924494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" ","response":"range_response_count:1 size:4071"}
	{"level":"info","ts":"2023-12-12T21:10:23.09024Z","caller":"traceutil/trace.go:171","msg":"trace[817726128] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l; range_end:; response_count:1; response_revision:592; }","duration":"669.848602ms","start":"2023-12-12T21:10:22.420381Z","end":"2023-12-12T21:10:23.090229Z","steps":["trace[817726128] 'agreement among raft nodes before linearized reading'  (duration: 668.903836ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:10:23.090299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:10:22.420364Z","time spent":"669.918803ms","remote":"127.0.0.1:48208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4094,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-v978l\" "}
	{"level":"info","ts":"2023-12-12T21:19:50.024472Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":823}
	{"level":"info","ts":"2023-12-12T21:19:50.027372Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":823,"took":"2.442291ms","hash":1401188404}
	{"level":"info","ts":"2023-12-12T21:19:50.027542Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1401188404,"revision":823,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T21:24:50.032345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1065}
	{"level":"info","ts":"2023-12-12T21:24:50.035249Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1065,"took":"2.221086ms","hash":1188151291}
	{"level":"info","ts":"2023-12-12T21:24:50.035383Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1188151291,"revision":1065,"compact-revision":823}
	{"level":"info","ts":"2023-12-12T21:29:50.043174Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1309}
	{"level":"info","ts":"2023-12-12T21:29:50.04708Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1309,"took":"3.432099ms","hash":1820363439}
	{"level":"info","ts":"2023-12-12T21:29:50.047135Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1820363439,"revision":1309,"compact-revision":1065}
	{"level":"info","ts":"2023-12-12T21:30:11.390043Z","caller":"traceutil/trace.go:171","msg":"trace[509867555] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"393.706704ms","start":"2023-12-12T21:30:10.996276Z","end":"2023-12-12T21:30:11.389983Z","steps":["trace[509867555] 'process raft request'  (duration: 393.511442ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:30:11.390468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:30:10.996257Z","time spent":"393.973686ms","remote":"127.0.0.1:48204","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1568 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 21:30:59 up 21 min,  0 users,  load average: 0.16, 0.10, 0.14
	Linux embed-certs-831188 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] <==
	W1212 21:27:52.673827       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:27:52.674024       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:27:52.674074       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:28:51.480164       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 21:29:51.481188       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:29:51.676381       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:29:51.676532       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:29:51.676951       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:29:52.676851       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:29:52.677126       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:29:52.677272       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:29:52.677014       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:29:52.677364       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:29:52.678593       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:30:51.480161       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:30:52.678494       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:30:52.678771       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:30:52.678810       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:30:52.678875       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:30:52.678904       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:30:52.680745       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] <==
	I1212 21:25:04.915535       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:25:34.376017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:25:34.924437       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:26:04.382852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:26:04.933140       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:26:08.033762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="282.248µs"
	I1212 21:26:21.032776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="200.02µs"
	E1212 21:26:34.388682       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:26:34.943084       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:27:04.394420       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:27:04.952915       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:27:34.400931       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:27:34.962345       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:28:04.407852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:28:04.976259       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:28:34.414592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:28:34.984926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:29:04.423470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:29:04.996313       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:29:34.429847       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:29:35.006649       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:30:04.438195       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:30:05.025928       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:30:34.444812       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:30:35.039679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] <==
	I1212 21:09:54.296343       1 server_others.go:69] "Using iptables proxy"
	I1212 21:09:54.313405       1 node.go:141] Successfully retrieved node IP: 192.168.50.163
	I1212 21:09:54.377951       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 21:09:54.378009       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 21:09:54.383758       1 server_others.go:152] "Using iptables Proxier"
	I1212 21:09:54.383918       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 21:09:54.384298       1 server.go:846] "Version info" version="v1.28.4"
	I1212 21:09:54.384355       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:09:54.385295       1 config.go:188] "Starting service config controller"
	I1212 21:09:54.385361       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 21:09:54.385418       1 config.go:97] "Starting endpoint slice config controller"
	I1212 21:09:54.385443       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 21:09:54.387343       1 config.go:315] "Starting node config controller"
	I1212 21:09:54.387612       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 21:09:54.485920       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 21:09:54.485959       1 shared_informer.go:318] Caches are synced for service config
	I1212 21:09:54.488110       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] <==
	W1212 21:09:51.642460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:09:51.642508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 21:09:51.642580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 21:09:51.642592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 21:09:51.642643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:09:51.642652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 21:09:51.642766       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:09:51.642780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 21:09:51.647035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 21:09:51.647093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 21:09:51.647166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:09:51.647175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 21:09:51.647226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 21:09:51.647235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 21:09:51.647277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 21:09:51.647285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 21:09:51.647331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:09:51.647340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 21:09:51.650087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 21:09:51.650148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 21:09:51.660149       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:09:51.660210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 21:09:51.660293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 21:09:51.660304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1212 21:09:53.222107       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:09:17 UTC, ends at Tue 2023-12-12 21:30:59 UTC. --
	Dec 12 21:28:32 embed-certs-831188 kubelet[924]: E1212 21:28:32.010019     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:28:45 embed-certs-831188 kubelet[924]: E1212 21:28:45.011013     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:28:45 embed-certs-831188 kubelet[924]: E1212 21:28:45.027841     924 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:28:45 embed-certs-831188 kubelet[924]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:28:45 embed-certs-831188 kubelet[924]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:28:45 embed-certs-831188 kubelet[924]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:28:56 embed-certs-831188 kubelet[924]: E1212 21:28:56.009371     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:29:11 embed-certs-831188 kubelet[924]: E1212 21:29:11.009898     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:29:24 embed-certs-831188 kubelet[924]: E1212 21:29:24.010087     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:29:37 embed-certs-831188 kubelet[924]: E1212 21:29:37.009324     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:29:45 embed-certs-831188 kubelet[924]: E1212 21:29:45.036378     924 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:29:45 embed-certs-831188 kubelet[924]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:29:45 embed-certs-831188 kubelet[924]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:29:45 embed-certs-831188 kubelet[924]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:29:45 embed-certs-831188 kubelet[924]: E1212 21:29:45.050512     924 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 12 21:29:50 embed-certs-831188 kubelet[924]: E1212 21:29:50.009448     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:30:02 embed-certs-831188 kubelet[924]: E1212 21:30:02.009440     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:30:13 embed-certs-831188 kubelet[924]: E1212 21:30:13.010081     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:30:26 embed-certs-831188 kubelet[924]: E1212 21:30:26.009939     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:30:40 embed-certs-831188 kubelet[924]: E1212 21:30:40.009879     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	Dec 12 21:30:45 embed-certs-831188 kubelet[924]: E1212 21:30:45.024159     924 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:30:45 embed-certs-831188 kubelet[924]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:30:45 embed-certs-831188 kubelet[924]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:30:45 embed-certs-831188 kubelet[924]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:30:51 embed-certs-831188 kubelet[924]: E1212 21:30:51.009549     924 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v978l" podUID="5870eb0c-b40b-4fc5-bf09-de1ed799993c"
	
	
	==> storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] <==
	I1212 21:09:54.207217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 21:10:24.213631       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] <==
	I1212 21:10:24.488158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:10:24.501805       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:10:24.502236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:10:41.915442       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:10:41.916131       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-831188_7283bd0a-dad0-48c5-92a8-289512fb0d28!
	I1212 21:10:41.917683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9972a0d0-bc39-4530-9b64-42ff37a1ad1e", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-831188_7283bd0a-dad0-48c5-92a8-289512fb0d28 became leader
	I1212 21:10:42.016527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-831188_7283bd0a-dad0-48c5-92a8-289512fb0d28!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831188 -n embed-certs-831188
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-831188 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-v978l
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-831188 describe pod metrics-server-57f55c9bc5-v978l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-831188 describe pod metrics-server-57f55c9bc5-v978l: exit status 1 (64.929638ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-v978l" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-831188 describe pod metrics-server-57f55c9bc5-v978l: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (459.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (448.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:24:39.384469   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:24:42.874836   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:31:37.735056294 +0000 UTC m=+5698.955228862
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-171828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.289µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-171828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
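start_stop_delete_test.go:297 asserts that the dashboard-metrics-scraper deployment carries the overridden registry.k8s.io/echoserver:1.4 image, but the describe call above ran against an already-expired context (it failed after 1.289µs), so the deployment info it checks came back empty. A minimal manual form of the same check (not part of the test output), assuming the default-k8s-diff-port-171828 cluster is still reachable:

  kubectl --context default-k8s-diff-port-171828 -n kubernetes-dashboard \
    get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

An empty result or a NotFound error here would suggest the dashboard addon never finished applying after the restart, rather than a wrong image.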
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-171828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-171828 logs -n 25: (1.232314015s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:29 UTC | 12 Dec 23 21:29 UTC |
	| start   | -p newest-cni-422706 --memory=2200 --alsologtostderr   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:29 UTC | 12 Dec 23 21:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	| addons  | enable metrics-server -p newest-cni-422706             | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-422706                                   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-422706                  | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-422706 --memory=2200 --alsologtostderr   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:30 UTC | 12 Dec 23 21:31 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:31 UTC | 12 Dec 23 21:31 UTC |
	| image   | newest-cni-422706 image list                           | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:31 UTC | 12 Dec 23 21:31 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-422706                                   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:31 UTC | 12 Dec 23 21:31 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-422706                                   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:31 UTC | 12 Dec 23 21:31 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-422706                                   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:31 UTC | 12 Dec 23 21:31 UTC |
	| delete  | -p newest-cni-422706                                   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:31 UTC | 12 Dec 23 21:31 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:30:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:30:43.504910   67309 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:30:43.505160   67309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:43.505170   67309 out.go:309] Setting ErrFile to fd 2...
	I1212 21:30:43.505175   67309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:30:43.505410   67309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:30:43.505986   67309 out.go:303] Setting JSON to false
	I1212 21:30:43.506958   67309 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7998,"bootTime":1702408646,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:30:43.507020   67309 start.go:138] virtualization: kvm guest
	I1212 21:30:43.508833   67309 out.go:177] * [newest-cni-422706] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:30:43.510707   67309 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:30:43.510727   67309 notify.go:220] Checking for updates...
	I1212 21:30:43.512011   67309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:30:43.513451   67309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:30:43.514693   67309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:30:43.516637   67309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:30:43.518173   67309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:30:43.520106   67309 config.go:182] Loaded profile config "newest-cni-422706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:30:43.520689   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:30:43.520744   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:30:43.535079   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I1212 21:30:43.535472   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:30:43.536043   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:30:43.536068   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:30:43.536493   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:30:43.536685   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:43.536926   67309 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:30:43.537208   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:30:43.537239   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:30:43.551973   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I1212 21:30:43.552366   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:30:43.552898   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:30:43.552928   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:30:43.553258   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:30:43.553444   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:43.590100   67309 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:30:43.591354   67309 start.go:298] selected driver: kvm2
	I1212 21:30:43.591371   67309 start.go:902] validating driver "kvm2" against &{Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node
_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:30:43.591483   67309 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:30:43.592410   67309 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:43.592519   67309 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:30:43.609387   67309 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:30:43.609771   67309 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:30:43.609832   67309 cni.go:84] Creating CNI manager for ""
	I1212 21:30:43.609846   67309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:30:43.609858   67309 start_flags.go:323] config:
	{Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:30:43.609987   67309 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:30:43.611680   67309 out.go:177] * Starting control plane node newest-cni-422706 in cluster newest-cni-422706
	I1212 21:30:43.613027   67309 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:30:43.613067   67309 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 21:30:43.613089   67309 cache.go:56] Caching tarball of preloaded images
	I1212 21:30:43.613194   67309 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:30:43.613215   67309 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 21:30:43.613343   67309 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/config.json ...
	I1212 21:30:43.613528   67309 start.go:365] acquiring machines lock for newest-cni-422706: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:30:43.613572   67309 start.go:369] acquired machines lock for "newest-cni-422706" in 25.963µs
	I1212 21:30:43.613589   67309 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:30:43.613597   67309 fix.go:54] fixHost starting: 
	I1212 21:30:43.613866   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:30:43.613907   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:30:43.627489   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I1212 21:30:43.627963   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:30:43.628504   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:30:43.628526   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:30:43.628826   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:30:43.629060   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:43.629257   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:30:43.630824   67309 fix.go:102] recreateIfNeeded on newest-cni-422706: state=Stopped err=<nil>
	I1212 21:30:43.630863   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	W1212 21:30:43.631026   67309 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:30:43.633819   67309 out.go:177] * Restarting existing kvm2 VM for "newest-cni-422706" ...
	I1212 21:30:43.635625   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Start
	I1212 21:30:43.635850   67309 main.go:141] libmachine: (newest-cni-422706) Ensuring networks are active...
	I1212 21:30:43.636650   67309 main.go:141] libmachine: (newest-cni-422706) Ensuring network default is active
	I1212 21:30:43.636963   67309 main.go:141] libmachine: (newest-cni-422706) Ensuring network mk-newest-cni-422706 is active
	I1212 21:30:43.637262   67309 main.go:141] libmachine: (newest-cni-422706) Getting domain xml...
	I1212 21:30:43.637931   67309 main.go:141] libmachine: (newest-cni-422706) Creating domain...
	I1212 21:30:44.912579   67309 main.go:141] libmachine: (newest-cni-422706) Waiting to get IP...
	I1212 21:30:44.913452   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:44.913801   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:44.913891   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:44.913795   67344 retry.go:31] will retry after 201.193598ms: waiting for machine to come up
	I1212 21:30:45.116325   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:45.116952   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:45.116989   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:45.116876   67344 retry.go:31] will retry after 378.928404ms: waiting for machine to come up
	I1212 21:30:45.497378   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:45.497829   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:45.497853   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:45.497776   67344 retry.go:31] will retry after 395.425408ms: waiting for machine to come up
	I1212 21:30:45.894305   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:45.894748   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:45.894770   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:45.894722   67344 retry.go:31] will retry after 501.520185ms: waiting for machine to come up
	I1212 21:30:46.397311   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:46.397780   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:46.397803   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:46.397726   67344 retry.go:31] will retry after 587.486964ms: waiting for machine to come up
	I1212 21:30:46.986459   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:46.988250   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:46.988293   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:46.988130   67344 retry.go:31] will retry after 910.026428ms: waiting for machine to come up
	I1212 21:30:47.899682   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:47.900147   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:47.900175   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:47.900100   67344 retry.go:31] will retry after 1.092954286s: waiting for machine to come up
	I1212 21:30:48.994909   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:48.995398   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:48.995428   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:48.995353   67344 retry.go:31] will retry after 1.081223185s: waiting for machine to come up
	I1212 21:30:50.077929   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:50.078385   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:50.078407   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:50.078332   67344 retry.go:31] will retry after 1.609230983s: waiting for machine to come up
	I1212 21:30:51.690011   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:51.690456   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:51.690491   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:51.690401   67344 retry.go:31] will retry after 1.542334592s: waiting for machine to come up
	I1212 21:30:53.234536   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:53.234853   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:53.234890   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:53.234774   67344 retry.go:31] will retry after 2.858549698s: waiting for machine to come up
	I1212 21:30:56.095135   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:56.095683   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:56.095714   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:56.095623   67344 retry.go:31] will retry after 2.56857983s: waiting for machine to come up
	I1212 21:30:58.665840   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:58.666347   67309 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:30:58.666379   67309 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:30:58.666321   67344 retry.go:31] will retry after 3.697434771s: waiting for machine to come up
	I1212 21:31:02.368372   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.368798   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has current primary IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.368831   67309 main.go:141] libmachine: (newest-cni-422706) Found IP for machine: 192.168.39.163
	I1212 21:31:02.368854   67309 main.go:141] libmachine: (newest-cni-422706) Reserving static IP address...
	I1212 21:31:02.369236   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "newest-cni-422706", mac: "52:54:00:b4:d1:77", ip: "192.168.39.163"} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.369266   67309 main.go:141] libmachine: (newest-cni-422706) Reserved static IP address: 192.168.39.163
	I1212 21:31:02.369290   67309 main.go:141] libmachine: (newest-cni-422706) DBG | skip adding static IP to network mk-newest-cni-422706 - found existing host DHCP lease matching {name: "newest-cni-422706", mac: "52:54:00:b4:d1:77", ip: "192.168.39.163"}
	I1212 21:31:02.369311   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Getting to WaitForSSH function...
	I1212 21:31:02.369323   67309 main.go:141] libmachine: (newest-cni-422706) Waiting for SSH to be available...
	I1212 21:31:02.371475   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.371777   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.371811   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.371891   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Using SSH client type: external
	I1212 21:31:02.371930   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa (-rw-------)
	I1212 21:31:02.371966   67309 main.go:141] libmachine: (newest-cni-422706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:31:02.371984   67309 main.go:141] libmachine: (newest-cni-422706) DBG | About to run SSH command:
	I1212 21:31:02.371997   67309 main.go:141] libmachine: (newest-cni-422706) DBG | exit 0
	I1212 21:31:02.459118   67309 main.go:141] libmachine: (newest-cni-422706) DBG | SSH cmd err, output: <nil>: 
	I1212 21:31:02.459536   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetConfigRaw
	I1212 21:31:02.460117   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:31:02.462344   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.462698   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.462731   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.462955   67309 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/config.json ...
	I1212 21:31:02.463133   67309 machine.go:88] provisioning docker machine ...
	I1212 21:31:02.463150   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:02.463348   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:31:02.463512   67309 buildroot.go:166] provisioning hostname "newest-cni-422706"
	I1212 21:31:02.463528   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:31:02.463641   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:02.465982   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.466347   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.466364   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.466548   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:02.466726   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:02.466846   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:02.467010   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:02.467175   67309 main.go:141] libmachine: Using SSH client type: native
	I1212 21:31:02.467549   67309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:31:02.467564   67309 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-422706 && echo "newest-cni-422706" | sudo tee /etc/hostname
	I1212 21:31:02.597638   67309 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-422706
	
	I1212 21:31:02.597669   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:02.600324   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.600625   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.600658   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.600778   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:02.600964   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:02.601149   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:02.601267   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:02.601412   67309 main.go:141] libmachine: Using SSH client type: native
	I1212 21:31:02.601713   67309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:31:02.601730   67309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-422706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-422706/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-422706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:31:02.727652   67309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:31:02.727683   67309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:31:02.727703   67309 buildroot.go:174] setting up certificates
	I1212 21:31:02.727718   67309 provision.go:83] configureAuth start
	I1212 21:31:02.727730   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:31:02.727958   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:31:02.730352   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.730698   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.730731   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.730871   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:02.733446   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.733770   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:02.733799   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:02.733956   67309 provision.go:138] copyHostCerts
	I1212 21:31:02.734010   67309 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:31:02.734024   67309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:31:02.734095   67309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:31:02.734278   67309 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:31:02.734292   67309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:31:02.734353   67309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:31:02.734444   67309 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:31:02.734456   67309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:31:02.734491   67309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:31:02.734566   67309 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.newest-cni-422706 san=[192.168.39.163 192.168.39.163 localhost 127.0.0.1 minikube newest-cni-422706]
	I1212 21:31:03.031011   67309 provision.go:172] copyRemoteCerts
	I1212 21:31:03.031065   67309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:31:03.031087   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:03.033962   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.034299   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.034321   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.034467   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:03.034691   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.034863   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:03.034992   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:03.120110   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:31:03.143437   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:31:03.166816   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:31:03.190343   67309 provision.go:86] duration metric: configureAuth took 462.611548ms
	I1212 21:31:03.190379   67309 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:31:03.190623   67309 config.go:182] Loaded profile config "newest-cni-422706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:31:03.190704   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:03.193412   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.193831   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.193854   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.193982   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:03.194166   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.194318   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.194501   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:03.194675   67309 main.go:141] libmachine: Using SSH client type: native
	I1212 21:31:03.194998   67309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:31:03.195035   67309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:31:03.513149   67309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:31:03.513178   67309 machine.go:91] provisioned docker machine in 1.05003186s
	I1212 21:31:03.513190   67309 start.go:300] post-start starting for "newest-cni-422706" (driver="kvm2")
	I1212 21:31:03.513210   67309 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:31:03.513247   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:03.513607   67309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:31:03.513641   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:03.516210   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.516599   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.516622   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.516812   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:03.517014   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.517194   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:03.517336   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:03.605662   67309 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:31:03.610030   67309 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:31:03.610059   67309 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:31:03.610136   67309 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:31:03.610215   67309 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:31:03.610299   67309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:31:03.619868   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:31:03.643047   67309 start.go:303] post-start completed in 129.841712ms
	I1212 21:31:03.643073   67309 fix.go:56] fixHost completed within 20.029475924s
	I1212 21:31:03.643145   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:03.645948   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.646296   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.646351   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.646423   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:03.646625   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.646801   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.646928   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:03.647099   67309 main.go:141] libmachine: Using SSH client type: native
	I1212 21:31:03.647457   67309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:31:03.647470   67309 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 21:31:03.764012   67309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702416663.730505938
	
	I1212 21:31:03.764033   67309 fix.go:206] guest clock: 1702416663.730505938
	I1212 21:31:03.764048   67309 fix.go:219] Guest: 2023-12-12 21:31:03.730505938 +0000 UTC Remote: 2023-12-12 21:31:03.643076909 +0000 UTC m=+20.188803583 (delta=87.429029ms)
	I1212 21:31:03.764090   67309 fix.go:190] guest clock delta is within tolerance: 87.429029ms
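The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and only resync when the delta exceeds a tolerance. A small sketch of that check; the threshold below is an assumption, not minikube's exact value:

// Minimal sketch of the guest-clock delta check logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value taken from the log line above.
	guest, err := parseGuestClock("1702416663.730505938")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}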
	I1212 21:31:03.764099   67309 start.go:83] releasing machines lock for "newest-cni-422706", held for 20.150514743s
	I1212 21:31:03.764118   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:03.764387   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:31:03.766946   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.767386   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.767421   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.767624   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:03.768132   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:03.768304   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:03.768389   67309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:31:03.768428   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:03.768527   67309 ssh_runner.go:195] Run: cat /version.json
	I1212 21:31:03.768549   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:03.770811   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.771015   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.771163   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.771188   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.771324   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:03.771481   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:03.771501   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.771506   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:03.771644   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:03.771732   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:03.771816   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:03.771890   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:03.771914   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:03.772013   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:03.860838   67309 ssh_runner.go:195] Run: systemctl --version
	I1212 21:31:03.891545   67309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:31:04.042572   67309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:31:04.049067   67309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:31:04.049152   67309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:31:04.065450   67309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:31:04.065478   67309 start.go:475] detecting cgroup driver to use...
	I1212 21:31:04.065558   67309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:31:04.081018   67309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:31:04.093577   67309 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:31:04.093646   67309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:31:04.106481   67309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:31:04.119124   67309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:31:04.223033   67309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:31:04.340519   67309 docker.go:219] disabling docker service ...
	I1212 21:31:04.340594   67309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:31:04.354445   67309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:31:04.365853   67309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:31:04.473492   67309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:31:04.579666   67309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:31:04.592011   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:31:04.609479   67309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:31:04.609571   67309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:31:04.618479   67309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:31:04.618558   67309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:31:04.628320   67309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:31:04.637941   67309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:31:04.647172   67309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:31:04.656991   67309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:31:04.665123   67309 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:31:04.665193   67309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:31:04.677309   67309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:31:04.686157   67309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:31:04.783833   67309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:31:04.944355   67309 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:31:04.944419   67309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:31:04.953131   67309 start.go:543] Will wait 60s for crictl version
	I1212 21:31:04.953184   67309 ssh_runner.go:195] Run: which crictl
	I1212 21:31:04.957295   67309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:31:04.994851   67309 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
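After restarting CRI-O, the run waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to respond. A minimal sketch of that wait-for-socket step (the interval is an assumption; this is not minikube's exact code):

// Illustrative: poll for the CRI-O socket with a deadline instead of assuming
// the restart is instantaneous.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout expires.
func waitForFile(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is present")
}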
	I1212 21:31:04.994944   67309 ssh_runner.go:195] Run: crio --version
	I1212 21:31:05.050451   67309 ssh_runner.go:195] Run: crio --version
	I1212 21:31:05.102347   67309 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 21:31:05.103608   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:31:05.106544   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:05.106905   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:05.106937   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:05.107127   67309 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:31:05.112314   67309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
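The /bin/bash -c command above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal entry, then append the current one. An equivalent sketch in Go (illustrative only; running it against the real file needs root):

// Sketch of the idempotent /etc/hosts rewrite shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the `grep -v $'\thost.minikube.internal$'` filter from the log.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}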
	I1212 21:31:05.127211   67309 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:31:05.128744   67309 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:31:05.128827   67309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:31:05.183436   67309 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:31:05.183519   67309 ssh_runner.go:195] Run: which lz4
	I1212 21:31:05.187906   67309 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:31:05.192131   67309 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:31:05.192166   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401739178 bytes)
	I1212 21:31:06.823573   67309 crio.go:444] Took 1.635713 seconds to copy over tarball
	I1212 21:31:06.823648   67309 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:31:09.794619   67309 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.970945084s)
	I1212 21:31:09.794662   67309 crio.go:451] Took 2.971046 seconds to extract the tarball
	I1212 21:31:09.794671   67309 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:31:09.833812   67309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:31:09.873891   67309 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:31:09.873919   67309 cache_images.go:84] Images are preloaded, skipping loading
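The two "sudo crictl images --output json" runs above decide whether the preload already contains the expected images. A hedged sketch of that check; the JSON field names ("images", "repoTags") follow crictl's JSON output but are assumptions as far as this report goes:

// Parse crictl's image listing and look for an expected tag.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.0-rc.2")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println("preloaded:", ok)
}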
	I1212 21:31:09.874018   67309 ssh_runner.go:195] Run: crio config
	I1212 21:31:09.934694   67309 cni.go:84] Creating CNI manager for ""
	I1212 21:31:09.934724   67309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:31:09.934748   67309 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1212 21:31:09.934771   67309 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-422706 NodeName:newest-cni-422706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:31:09.934949   67309 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-422706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:31:09.935059   67309 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-422706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:31:09.935132   67309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:31:09.944877   67309 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:31:09.944940   67309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:31:09.954148   67309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1212 21:31:09.970090   67309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:31:09.986560   67309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1212 21:31:10.003621   67309 ssh_runner.go:195] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:31:10.007614   67309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:31:10.019104   67309 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706 for IP: 192.168.39.163
	I1212 21:31:10.019147   67309 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:31:10.019316   67309 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:31:10.019354   67309 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:31:10.019418   67309 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/client.key
	I1212 21:31:10.019480   67309 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key.a64e5ae8
	I1212 21:31:10.019517   67309 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.key
	I1212 21:31:10.019621   67309 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:31:10.019649   67309 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:31:10.019659   67309 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:31:10.019693   67309 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:31:10.019718   67309 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:31:10.019739   67309 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:31:10.019782   67309 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:31:10.020398   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:31:10.045375   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:31:10.069200   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:31:10.094367   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:31:10.118363   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:31:10.141915   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:31:10.170570   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:31:10.198662   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:31:10.227141   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:31:10.253868   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:31:10.281321   67309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:31:10.307650   67309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:31:10.326197   67309 ssh_runner.go:195] Run: openssl version
	I1212 21:31:10.332075   67309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:31:10.342580   67309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:31:10.347569   67309 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:31:10.347629   67309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:31:10.353553   67309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:31:10.364270   67309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:31:10.374950   67309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:31:10.379612   67309 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:31:10.379660   67309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:31:10.385462   67309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:31:10.395461   67309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:31:10.405670   67309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:31:10.410399   67309 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:31:10.410457   67309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:31:10.416302   67309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:31:10.426989   67309 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:31:10.431794   67309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:31:10.437540   67309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:31:10.443174   67309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:31:10.448859   67309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:31:10.454478   67309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:31:10.460588   67309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
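Each openssl call above is a 24-hour expiry check (-checkend 86400) against a control-plane certificate. The same check expressed in Go with crypto/x509, as a small sketch rather than minikube's implementation:

// Report whether a certificate expires within the next 24 hours,
// equivalent to `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}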
	I1212 21:31:10.466285   67309 kubeadm.go:404] StartCluster: {Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:31:10.466375   67309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:31:10.466431   67309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:31:10.507583   67309 cri.go:89] found id: ""
	I1212 21:31:10.507688   67309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:31:10.516998   67309 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:31:10.517020   67309 kubeadm.go:636] restartCluster start
	I1212 21:31:10.517091   67309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:31:10.525431   67309 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:10.526033   67309 kubeconfig.go:135] verify returned: extract IP: "newest-cni-422706" does not appear in /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:31:10.526285   67309 kubeconfig.go:146] "newest-cni-422706" context is missing from /home/jenkins/minikube-integration/17734-9188/kubeconfig - will repair!
	I1212 21:31:10.526793   67309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:31:10.641399   67309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:31:10.650797   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:10.650880   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:10.662526   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:10.662546   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:10.662585   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:10.673410   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:11.174128   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:11.174200   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:11.186020   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:11.673599   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:11.673692   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:11.685788   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:12.174420   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:12.174508   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:12.185909   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:12.674562   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:12.674644   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:12.686879   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:13.174458   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:13.174553   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:13.185909   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:13.673754   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:13.673830   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:13.684989   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:14.173527   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:14.173641   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:14.184562   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:14.674153   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:14.674256   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:14.685172   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:15.173716   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:15.173794   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:15.184359   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:15.673880   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:15.673993   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:15.685784   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:16.174431   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:16.174531   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:16.186946   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:16.674546   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:16.674633   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:16.685706   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:17.174300   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:17.174403   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:17.185725   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:17.674338   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:17.674443   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:17.685709   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:18.174331   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:18.174458   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:18.186714   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:18.673977   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:18.674053   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:18.685632   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:19.174222   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:19.174306   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:19.185457   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:19.673977   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:19.674067   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:19.685746   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:20.174401   67309 api_server.go:166] Checking apiserver status ...
	I1212 21:31:20.174505   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:31:20.186696   67309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:31:20.651381   67309 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
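The repeated "Checking apiserver status ..." entries above are a poll run under a context deadline; when no kube-apiserver process appears before the deadline, restartCluster concludes the cluster needs reconfiguring. A compact sketch of that poll-with-deadline pattern (the interval and timeout below are assumptions):

// Retry `pgrep` until a kube-apiserver process shows up or the context deadline hits.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same probe as the log: match the apiserver started for this cluster.
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded", as in the log
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServerProcess(ctx); err != nil {
		fmt.Println("apiserver not up:", err, "- cluster needs reconfigure")
		return
	}
	fmt.Println("apiserver process found")
}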
	I1212 21:31:20.651421   67309 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:31:20.651435   67309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:31:20.651512   67309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:31:20.689465   67309 cri.go:89] found id: ""
	I1212 21:31:20.689552   67309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:31:20.706829   67309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:31:20.715898   67309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:31:20.715976   67309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:31:20.724656   67309 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:31:20.724682   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:31:20.842720   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:31:22.200335   67309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.357567295s)
	I1212 21:31:22.200368   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:31:22.401274   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:31:22.475642   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:31:22.559852   67309 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:31:22.559940   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:22.572469   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:23.084354   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:23.584501   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:24.084627   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:24.584581   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:25.084102   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:25.110760   67309 api_server.go:72] duration metric: took 2.55090332s to wait for apiserver process to appear ...
	I1212 21:31:25.110788   67309 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:31:25.110806   67309 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1212 21:31:28.531885   67309 api_server.go:279] https://192.168.39.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:31:28.531920   67309 api_server.go:103] status: https://192.168.39.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:31:28.531936   67309 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1212 21:31:28.549332   67309 api_server.go:279] https://192.168.39.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:31:28.549365   67309 api_server.go:103] status: https://192.168.39.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:31:29.049752   67309 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1212 21:31:29.055480   67309 api_server.go:279] https://192.168.39.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:31:29.055513   67309 api_server.go:103] status: https://192.168.39.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:31:29.549767   67309 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1212 21:31:29.556653   67309 api_server.go:279] https://192.168.39.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:31:29.556725   67309 api_server.go:103] status: https://192.168.39.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:31:30.050348   67309 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1212 21:31:30.056023   67309 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I1212 21:31:30.064724   67309 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:31:30.064749   67309 api_server.go:131] duration metric: took 4.953955591s to wait for apiserver health ...
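The healthz probes above move from 403 (anonymous access rejected) to 500 (post-start hooks such as rbac/bootstrap-roles still running) to 200. A sketch of that wait loop; TLS verification is skipped only to keep the example self-contained:

// Poll /healthz and treat 403/500 as "not ready yet", stopping only on HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap finishes, 500 while post-start hooks run.
			fmt.Printf("healthz returned %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.163:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}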
	I1212 21:31:30.064759   67309 cni.go:84] Creating CNI manager for ""
	I1212 21:31:30.064767   67309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:31:30.066802   67309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:31:30.068183   67309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:31:30.080887   67309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:31:30.103946   67309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:31:30.120203   67309 system_pods.go:59] 8 kube-system pods found
	I1212 21:31:30.120234   67309 system_pods.go:61] "coredns-76f75df574-cgjwb" [b02267cd-02d7-440d-851e-8342fd419692] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:31:30.120241   67309 system_pods.go:61] "etcd-newest-cni-422706" [6b21d8d5-157c-417a-b47f-419955619e3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:31:30.120248   67309 system_pods.go:61] "kube-apiserver-newest-cni-422706" [f36e1fcb-2aaa-4b93-a242-5ebaa9e1ab1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:31:30.120255   67309 system_pods.go:61] "kube-controller-manager-newest-cni-422706" [c1ca2e25-9fd5-4d27-b829-1745dcf91120] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:31:30.120260   67309 system_pods.go:61] "kube-proxy-chfvw" [a893f7b0-f36e-45df-9f3b-855ed582b29d] Running
	I1212 21:31:30.120279   67309 system_pods.go:61] "kube-scheduler-newest-cni-422706" [c452c431-c884-4d2a-a2d3-a386015ee529] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:31:30.120293   67309 system_pods.go:61] "metrics-server-57f55c9bc5-hlb6j" [3b8845c1-2c27-4aae-a62e-d211f6c78623] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:31:30.120302   67309 system_pods.go:61] "storage-provisioner" [ad7dd48a-e001-4f5f-baa8-695c65b78256] Running
	I1212 21:31:30.120309   67309 system_pods.go:74] duration metric: took 16.339313ms to wait for pod list to return data ...
	I1212 21:31:30.120319   67309 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:31:30.123472   67309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:31:30.123502   67309 node_conditions.go:123] node cpu capacity is 2
	I1212 21:31:30.123513   67309 node_conditions.go:105] duration metric: took 3.187527ms to run NodePressure ...
	I1212 21:31:30.123529   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:31:30.392196   67309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:31:30.402455   67309 ops.go:34] apiserver oom_adj: -16
	I1212 21:31:30.402476   67309 kubeadm.go:640] restartCluster took 19.885448325s
	I1212 21:31:30.402486   67309 kubeadm.go:406] StartCluster complete in 19.93620697s
	I1212 21:31:30.402506   67309 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:31:30.402583   67309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:31:30.403740   67309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:31:30.403970   67309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:31:30.404008   67309 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:31:30.404095   67309 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-422706"
	I1212 21:31:30.404107   67309 addons.go:69] Setting default-storageclass=true in profile "newest-cni-422706"
	I1212 21:31:30.404120   67309 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-422706"
	W1212 21:31:30.404129   67309 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:31:30.404149   67309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-422706"
	I1212 21:31:30.404164   67309 addons.go:69] Setting dashboard=true in profile "newest-cni-422706"
	I1212 21:31:30.404201   67309 addons.go:231] Setting addon dashboard=true in "newest-cni-422706"
	I1212 21:31:30.404202   67309 host.go:66] Checking if "newest-cni-422706" exists ...
	W1212 21:31:30.404212   67309 addons.go:240] addon dashboard should already be in state true
	I1212 21:31:30.404191   67309 addons.go:69] Setting metrics-server=true in profile "newest-cni-422706"
	I1212 21:31:30.404232   67309 config.go:182] Loaded profile config "newest-cni-422706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:31:30.404255   67309 addons.go:231] Setting addon metrics-server=true in "newest-cni-422706"
	W1212 21:31:30.404266   67309 addons.go:240] addon metrics-server should already be in state true
	I1212 21:31:30.404280   67309 host.go:66] Checking if "newest-cni-422706" exists ...
	I1212 21:31:30.404349   67309 host.go:66] Checking if "newest-cni-422706" exists ...
	I1212 21:31:30.404607   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.404618   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.404645   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.404646   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.404664   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.404731   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.404736   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.404774   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.408699   67309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-422706" context rescaled to 1 replicas
	I1212 21:31:30.408735   67309 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:31:30.411698   67309 out.go:177] * Verifying Kubernetes components...
	I1212 21:31:30.413126   67309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:31:30.422469   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I1212 21:31:30.422866   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.423368   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.423392   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.423709   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I1212 21:31:30.423840   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42221
	I1212 21:31:30.423966   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1212 21:31:30.423989   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.424240   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.424348   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.424372   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.424585   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.424664   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.424716   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.424731   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.424761   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.424786   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.425079   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.425099   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.425128   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.425144   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.425725   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.425752   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.425759   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.425773   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.426014   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.426221   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:31:30.429057   67309 addons.go:231] Setting addon default-storageclass=true in "newest-cni-422706"
	W1212 21:31:30.429073   67309 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:31:30.429092   67309 host.go:66] Checking if "newest-cni-422706" exists ...
	I1212 21:31:30.429425   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.429474   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.442030   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1212 21:31:30.442439   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.442675   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46343
	I1212 21:31:30.442941   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.442958   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.443063   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.443229   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.443399   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:31:30.444689   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I1212 21:31:30.445009   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:30.445104   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.445170   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.445184   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.447203   67309 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 21:31:30.445457   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.445763   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.447233   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I1212 21:31:30.448664   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.450062   67309 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1212 21:31:30.448931   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:31:30.449004   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.449014   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.451389   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 21:31:30.451408   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 21:31:30.451426   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:30.451476   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:31:30.451820   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.451843   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.452139   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.453246   67309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:31:30.453280   67309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:31:30.454242   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:30.454590   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:30.456013   67309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:31:30.454655   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.455204   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:30.457558   67309 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:31:30.456054   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:30.457587   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:31:30.457688   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:30.458825   67309 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:31:30.460137   67309 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:31:30.460152   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:31:30.460165   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:30.458881   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.458888   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:30.458959   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:30.460361   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:30.463688   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.463944   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.464196   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:30.464230   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.464398   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:30.464456   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:30.464473   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.464626   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:30.464648   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:30.464798   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:30.464830   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:30.464945   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:30.464966   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:30.465053   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:30.471028   67309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1212 21:31:30.471381   67309 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:31:30.471869   67309 main.go:141] libmachine: Using API Version  1
	I1212 21:31:30.471880   67309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:31:30.472251   67309 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:31:30.472440   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:31:30.473946   67309 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:31:30.474146   67309 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:31:30.474155   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:31:30.474165   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:31:30.476630   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.476936   67309 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:30:56 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:31:30.476957   67309 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:31:30.477159   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:31:30.477308   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:31:30.477457   67309 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:31:30.477549   67309 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:31:30.559024   67309 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:31:30.559112   67309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:30.559196   67309 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:31:30.591760   67309 api_server.go:72] duration metric: took 182.996353ms to wait for apiserver process to appear ...
	I1212 21:31:30.591783   67309 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:31:30.591810   67309 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1212 21:31:30.605665   67309 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I1212 21:31:30.609910   67309 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:31:30.609932   67309 api_server.go:131] duration metric: took 18.143181ms to wait for apiserver health ...
	I1212 21:31:30.609939   67309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:31:30.627870   67309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:31:30.637210   67309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:31:30.637833   67309 system_pods.go:59] 8 kube-system pods found
	I1212 21:31:30.637861   67309 system_pods.go:61] "coredns-76f75df574-cgjwb" [b02267cd-02d7-440d-851e-8342fd419692] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:31:30.637871   67309 system_pods.go:61] "etcd-newest-cni-422706" [6b21d8d5-157c-417a-b47f-419955619e3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:31:30.637883   67309 system_pods.go:61] "kube-apiserver-newest-cni-422706" [f36e1fcb-2aaa-4b93-a242-5ebaa9e1ab1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:31:30.637896   67309 system_pods.go:61] "kube-controller-manager-newest-cni-422706" [c1ca2e25-9fd5-4d27-b829-1745dcf91120] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:31:30.637908   67309 system_pods.go:61] "kube-proxy-chfvw" [a893f7b0-f36e-45df-9f3b-855ed582b29d] Running
	I1212 21:31:30.637919   67309 system_pods.go:61] "kube-scheduler-newest-cni-422706" [c452c431-c884-4d2a-a2d3-a386015ee529] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:31:30.637932   67309 system_pods.go:61] "metrics-server-57f55c9bc5-hlb6j" [3b8845c1-2c27-4aae-a62e-d211f6c78623] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:31:30.637943   67309 system_pods.go:61] "storage-provisioner" [ad7dd48a-e001-4f5f-baa8-695c65b78256] Running
	I1212 21:31:30.637955   67309 system_pods.go:74] duration metric: took 28.009022ms to wait for pod list to return data ...
	I1212 21:31:30.637968   67309 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:31:30.643282   67309 default_sa.go:45] found service account: "default"
	I1212 21:31:30.643310   67309 default_sa.go:55] duration metric: took 5.333389ms for default service account to be created ...
	I1212 21:31:30.643323   67309 kubeadm.go:581] duration metric: took 234.562517ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1212 21:31:30.643352   67309 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:31:30.645076   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 21:31:30.645098   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 21:31:30.653652   67309 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:31:30.653675   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:31:30.657257   67309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:31:30.657287   67309 node_conditions.go:123] node cpu capacity is 2
	I1212 21:31:30.657299   67309 node_conditions.go:105] duration metric: took 13.940836ms to run NodePressure ...
	I1212 21:31:30.657314   67309 start.go:228] waiting for startup goroutines ...
	I1212 21:31:30.730704   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 21:31:30.730736   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 21:31:30.807746   67309 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:31:30.807771   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:31:30.844478   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 21:31:30.844507   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 21:31:30.893472   67309 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:31:30.893506   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:31:30.909757   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 21:31:30.909779   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 21:31:30.963150   67309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:31:30.978017   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 21:31:30.978046   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 21:31:31.076611   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 21:31:31.076637   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 21:31:31.134332   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 21:31:31.134360   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 21:31:31.183780   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 21:31:31.183809   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 21:31:31.269219   67309 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:31:31.269255   67309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 21:31:31.333368   67309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 21:31:32.698912   67309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.070996087s)
	I1212 21:31:32.698956   67309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.061718445s)
	I1212 21:31:32.698965   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.698978   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.698980   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.698989   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.699290   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.699331   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.699341   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.699349   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.699408   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.699425   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.699436   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.699445   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.699584   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.699625   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.699636   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Closing plugin on server side
	I1212 21:31:32.699703   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.699707   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Closing plugin on server side
	I1212 21:31:32.699727   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.716580   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.716605   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.716916   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Closing plugin on server side
	I1212 21:31:32.716949   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.716965   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.834652   67309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.871451771s)
	I1212 21:31:32.834712   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.834725   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.835021   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.835041   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.835052   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.835073   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.835346   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.835375   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.835393   67309 addons.go:467] Verifying addon metrics-server=true in "newest-cni-422706"
	I1212 21:31:32.835415   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Closing plugin on server side
	I1212 21:31:32.958939   67309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.6255212s)
	I1212 21:31:32.958989   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.959005   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.959336   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.959361   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.959372   67309 main.go:141] libmachine: Making call to close driver server
	I1212 21:31:32.959393   67309 main.go:141] libmachine: (newest-cni-422706) Calling .Close
	I1212 21:31:32.959337   67309 main.go:141] libmachine: (newest-cni-422706) DBG | Closing plugin on server side
	I1212 21:31:32.959628   67309 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:31:32.959644   67309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:31:32.961232   67309 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-422706 addons enable metrics-server	
	
	
	I1212 21:31:32.962461   67309 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1212 21:31:32.963675   67309 addons.go:502] enable addons completed in 2.559677815s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1212 21:31:32.963710   67309 start.go:233] waiting for cluster config update ...
	I1212 21:31:32.963726   67309 start.go:242] writing updated cluster config ...
	I1212 21:31:32.964136   67309 ssh_runner.go:195] Run: rm -f paused
	I1212 21:31:33.025749   67309 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 21:31:33.027171   67309 out.go:177] * Done! kubectl is now configured to use "newest-cni-422706" cluster and "default" namespace by default
	
	
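	The healthz polling visible in the log above (api_server.go repeatedly requesting https://192.168.39.163:8443/healthz until it returns 200) can be reproduced outside the test harness with a short probe. The following is a minimal sketch only: it assumes the apiserver address shown in this log and skips TLS verification purely for illustration, whereas the real check in minikube authenticates with the cluster's client certificates.

	```go
	// healthzprobe: poll the apiserver /healthz endpoint until it reports ok,
	// mirroring the retry loop seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative only: the certificate presented by a freshly
				// started apiserver is not trusted by this standalone probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		const url = "https://192.168.39.163:8443/healthz" // address taken from the log above
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %d %s\n", attempt, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reported "ok", matching the 200 seen in the log
			}
			time.Sleep(time.Second)
		}
	}
	```
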
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:10:03 UTC, ends at Tue 2023-12-12 21:31:38 UTC. --
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.453097839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416698453083968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=22610a3c-b0d5-4d9d-b4be-10ec1fd13677 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.454200342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c1f174c-7beb-4bb5-8fb7-8216f4524216 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.454273650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c1f174c-7beb-4bb5-8fb7-8216f4524216 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.454597467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8c1f174c-7beb-4bb5-8fb7-8216f4524216 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.496777876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=49c25c03-89c0-46ba-92c0-01dd6d024026 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.496864375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=49c25c03-89c0-46ba-92c0-01dd6d024026 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.498217225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=21f47573-9ec1-4693-a449-102ebf99ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.498577605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416698498566884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=21f47573-9ec1-4693-a449-102ebf99ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.499270339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9bba0983-8ca1-42ae-9903-819343150864 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.499394348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9bba0983-8ca1-42ae-9903-819343150864 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.499609298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9bba0983-8ca1-42ae-9903-819343150864 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.540948301Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c87f95d3-77c0-4e7c-8c92-108a2dfae66a name=/runtime.v1.RuntimeService/Version
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.541036992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c87f95d3-77c0-4e7c-8c92-108a2dfae66a name=/runtime.v1.RuntimeService/Version
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.542425237Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0bcca8db-ab7a-4461-9683-07d79193c261 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.542913587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416698542899648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0bcca8db-ab7a-4461-9683-07d79193c261 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.543537523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=90494403-64a1-4354-a786-8580c1b696a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.543611406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=90494403-64a1-4354-a786-8580c1b696a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.543894318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=90494403-64a1-4354-a786-8580c1b696a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.577334000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c6d06c6a-22bf-4074-8bf4-c1829f7345d7 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.577395671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c6d06c6a-22bf-4074-8bf4-c1829f7345d7 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.578930804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3fe5fe49-10d3-4580-afc3-d64c6cf3b925 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.579323919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416698579309943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3fe5fe49-10d3-4580-afc3-d64c6cf3b925 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.579901757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46152cc3-ac45-43a1-b6ff-10fbe9093353 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.579967425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46152cc3-ac45-43a1-b6ff-10fbe9093353 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:31:38 default-k8s-diff-port-171828 crio[726]: time="2023-12-12 21:31:38.580155902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415472326150540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c893e872464b52d6382e5d75c17ba00425a7bdc92184a6f27cf408b8c86c434c,PodSandboxId:481966ba028dd07ad582372bf5760702f71e3decd95596031188d4049dc5c0c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702415450772808342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2951bd10-8d18-4fbf-a012-312a24ff975d,},Annotations:map[string]string{io.kubernetes.container.hash: 444c7300,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478,PodSandboxId:0d8da62cfda8507038dbdd01ee00a164799f545a23d57b5215783b75bec6f37f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702415448997304733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b5jrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1089e305-a4ce-43d3-83cb-f754858297b3,},Annotations:map[string]string{io.kubernetes.container.hash: 58f7f280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399,PodSandboxId:b518f95b229fe2f7c2d03eb349691892ce3dc47fafd18a032a8c99e215300b44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702415441096604048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47qmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
93908813-508a-4c97-a20d-5d59a3e6befb,},Annotations:map[string]string{io.kubernetes.container.hash: 57ea3159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1,PodSandboxId:972104fa23ba04926acb8924c101e7f473186c8d04a0c02b28fc1952b4b0b65f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702415441059264257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
3a7c100-e7b7-4179-b821-d191741a66fb,},Annotations:map[string]string{io.kubernetes.container.hash: 375c49e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d,PodSandboxId:2cd11974b193c363fbf59e755977067410f653c885a57e299c42f49631198518,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702415435529746357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 505d35a2f109d457b405abf965bda356,},An
notations:map[string]string{io.kubernetes.container.hash: c730a191,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487,PodSandboxId:435b602d77216231c64a11f542bd30cb0dbdff53a23c55953ea16b92fe8cde70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702415435352898145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef9d8694a6b3de3fb85bd787d8246c1,},An
notations:map[string]string{io.kubernetes.container.hash: 4a7cb19c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0,PodSandboxId:830461dcb4c5bdee9f5f235397e07ea47b924ed59fb4df060d477c95489f2c42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702415435218913580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc288a48608e5707030f249b3df56ecb,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa,PodSandboxId:da2ac77f29ee89249b888e931ff104d28868339593ed6ed9261edffa5967fba5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702415435156440202,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-171828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
abdda30a4688164c7ce468a1c385a51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46152cc3-ac45-43a1-b6ff-10fbe9093353 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea6928f21cd25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   972104fa23ba0       storage-provisioner
	c893e872464b5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   481966ba028dd       busybox
	d5ecf165d7cb6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   0d8da62cfda85       coredns-5dd5756b68-b5jrg
	5c1bc3f3622da       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   b518f95b229fe       kube-proxy-47qmb
	ca0e02bbed658       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   972104fa23ba0       storage-provisioner
	45c49920e4072       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   2cd11974b193c       etcd-default-k8s-diff-port-171828
	27b89c10d83be       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   435b602d77216       kube-apiserver-default-k8s-diff-port-171828
	cd9a395f80d15       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   830461dcb4c5b       kube-scheduler-default-k8s-diff-port-171828
	b4c8c82cfc4cf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   da2ac77f29ee8       kube-controller-manager-default-k8s-diff-port-171828
	
	
	==> coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35738 - 7522 "HINFO IN 478030668955208960.6356851381917873108. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008753741s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-171828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-171828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=default-k8s-diff-port-171828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_02_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:02:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-171828
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 21:31:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:31:36 +0000   Tue, 12 Dec 2023 21:02:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:31:36 +0000   Tue, 12 Dec 2023 21:02:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:31:36 +0000   Tue, 12 Dec 2023 21:02:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:31:36 +0000   Tue, 12 Dec 2023 21:10:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.253
	  Hostname:    default-k8s-diff-port-171828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e54c995e9bd4393816bbe98760d69c0
	  System UUID:                9e54c995-e9bd-4393-816b-be98760d69c0
	  Boot ID:                    462fdaf8-d418-495c-9331-be8ebcbdc08f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-b5jrg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-171828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-171828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-171828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-47qmb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-171828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-fqrqh                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-171828 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-171828 event: Registered Node default-k8s-diff-port-171828 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-171828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-171828 event: Registered Node default-k8s-diff-port-171828 in Controller
	
	
	==> dmesg <==
	[Dec12 21:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.086663] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.754683] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec12 21:10] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155599] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.564849] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.137705] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.122665] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.167439] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.133838] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.239599] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +18.406432] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +14.169933] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.016652] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] <==
	{"level":"info","ts":"2023-12-12T21:20:38.162587Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":890}
	{"level":"info","ts":"2023-12-12T21:20:38.165797Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":890,"took":"2.808074ms","hash":3286688731}
	{"level":"info","ts":"2023-12-12T21:20:38.16587Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3286688731,"revision":890,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T21:25:38.170613Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1132}
	{"level":"info","ts":"2023-12-12T21:25:38.172393Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1132,"took":"1.442736ms","hash":2665612513}
	{"level":"info","ts":"2023-12-12T21:25:38.17247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2665612513,"revision":1132,"compact-revision":890}
	{"level":"info","ts":"2023-12-12T21:30:10.723964Z","caller":"traceutil/trace.go:171","msg":"trace[429396038] linearizableReadLoop","detail":"{readStateIndex:1876; appliedIndex:1875; }","duration":"192.084112ms","start":"2023-12-12T21:30:10.531842Z","end":"2023-12-12T21:30:10.723926Z","steps":["trace[429396038] 'read index received'  (duration: 191.760262ms)","trace[429396038] 'applied index is now lower than readState.Index'  (duration: 323.188µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T21:30:10.724243Z","caller":"traceutil/trace.go:171","msg":"trace[11584944] transaction","detail":"{read_only:false; response_revision:1595; number_of_response:1; }","duration":"264.939453ms","start":"2023-12-12T21:30:10.459284Z","end":"2023-12-12T21:30:10.724223Z","steps":["trace[11584944] 'process raft request'  (duration: 264.36963ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:30:10.724381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.486001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T21:30:10.724961Z","caller":"traceutil/trace.go:171","msg":"trace[2117253774] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1595; }","duration":"193.136379ms","start":"2023-12-12T21:30:10.53181Z","end":"2023-12-12T21:30:10.724946Z","steps":["trace[2117253774] 'agreement among raft nodes before linearized reading'  (duration: 192.463943ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:30:11.48237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.198244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T21:30:11.482532Z","caller":"traceutil/trace.go:171","msg":"trace[2003172563] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:1596; }","duration":"207.396164ms","start":"2023-12-12T21:30:11.27512Z","end":"2023-12-12T21:30:11.482516Z","steps":["trace[2003172563] 'count revisions from in-memory index tree'  (duration: 207.009291ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:30:11.879166Z","caller":"traceutil/trace.go:171","msg":"trace[511953777] transaction","detail":"{read_only:false; response_revision:1597; number_of_response:1; }","duration":"112.248899ms","start":"2023-12-12T21:30:11.766899Z","end":"2023-12-12T21:30:11.879148Z","steps":["trace[511953777] 'process raft request'  (duration: 111.668577ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T21:30:38.180023Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1376}
	{"level":"info","ts":"2023-12-12T21:30:38.182471Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1376,"took":"1.950232ms","hash":4019872562}
	{"level":"info","ts":"2023-12-12T21:30:38.182577Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4019872562,"revision":1376,"compact-revision":1132}
	{"level":"info","ts":"2023-12-12T21:31:11.589156Z","caller":"traceutil/trace.go:171","msg":"trace[848598941] transaction","detail":"{read_only:false; response_revision:1645; number_of_response:1; }","duration":"505.016193ms","start":"2023-12-12T21:31:11.084126Z","end":"2023-12-12T21:31:11.589143Z","steps":["trace[848598941] 'process raft request'  (duration: 504.910496ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:31:11.589638Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:31:11.084113Z","time spent":"505.092984ms","remote":"127.0.0.1:44952","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1644 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-12T21:31:11.837231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.075866ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7885112865970931154 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6d6d8c5fde0f45d1>","response":"size:40"}
	{"level":"warn","ts":"2023-12-12T21:31:11.83732Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:31:11.084407Z","time spent":"752.910693ms","remote":"127.0.0.1:44920","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-12-12T21:31:12.086369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.532213ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7885112865970931155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.253\" mod_revision:1638 > success:<request_put:<key:\"/registry/masterleases/192.168.72.253\" value_size:67 lease:7885112865970931153 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.253\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T21:31:12.086489Z","caller":"traceutil/trace.go:171","msg":"trace[1323621206] linearizableReadLoop","detail":"{readStateIndex:1942; appliedIndex:1941; }","duration":"138.197383ms","start":"2023-12-12T21:31:11.948282Z","end":"2023-12-12T21:31:12.086479Z","steps":["trace[1323621206] 'read index received'  (duration: 11.367509ms)","trace[1323621206] 'applied index is now lower than readState.Index'  (duration: 126.828938ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T21:31:12.086577Z","caller":"traceutil/trace.go:171","msg":"trace[549970391] transaction","detail":"{read_only:false; response_revision:1646; number_of_response:1; }","duration":"248.317718ms","start":"2023-12-12T21:31:11.838253Z","end":"2023-12-12T21:31:12.08657Z","steps":["trace[549970391] 'process raft request'  (duration: 121.531112ms)","trace[549970391] 'compare'  (duration: 126.427037ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T21:31:12.086973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.697195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T21:31:12.087345Z","caller":"traceutil/trace.go:171","msg":"trace[1857956629] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1646; }","duration":"139.076668ms","start":"2023-12-12T21:31:11.94826Z","end":"2023-12-12T21:31:12.087336Z","steps":["trace[1857956629] 'agreement among raft nodes before linearized reading'  (duration: 138.674159ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:31:38 up 21 min,  0 users,  load average: 0.34, 0.17, 0.16
	Linux default-k8s-diff-port-171828 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] <==
	W1212 21:28:40.811376       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:28:40.811442       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:28:40.811476       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:29:39.697787       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 21:30:39.698163       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:30:39.814178       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:30:39.814299       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:30:39.814829       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 21:30:40.815116       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:30:40.815217       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:30:40.815244       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:30:40.815362       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:30:40.815498       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:30:40.816792       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:31:11.590402       1 trace.go:236] Trace[252251321]: "Update" accept:application/json, */*,audit-id:6ba78039-85c0-4b59-a5da-c231112f8ae8,client:192.168.72.253,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (12-Dec-2023 21:31:11.082) (total time: 508ms):
	Trace[252251321]: ["GuaranteedUpdate etcd3" audit-id:6ba78039-85c0-4b59-a5da-c231112f8ae8,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 507ms (21:31:11.082)
	Trace[252251321]:  ---"Txn call completed" 506ms (21:31:11.590)]
	Trace[252251321]: [508.099376ms] [508.099376ms] END
	I1212 21:31:12.087988       1 trace.go:236] Trace[461787538]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.72.253,type:*v1.Endpoints,resource:apiServerIPInfo (12-Dec-2023 21:31:11.083) (total time: 1004ms):
	Trace[461787538]: ---"Transaction prepared" 753ms (21:31:11.837)
	Trace[461787538]: ---"Txn call completed" 250ms (21:31:12.087)
	Trace[461787538]: [1.004514219s] [1.004514219s] END
	
	
	==> kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] <==
	I1212 21:25:53.719536       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:26:23.161269       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:26:23.727845       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:26:53.166667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:26:53.739529       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:27:02.093187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="287.098µs"
	I1212 21:27:17.087905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="133.619µs"
	E1212 21:27:23.172938       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:27:23.751949       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:27:53.178550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:27:53.764395       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:28:23.184959       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:28:23.773639       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:28:53.190942       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:28:53.784143       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:29:23.197598       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:29:23.793935       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:29:53.205214       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:29:53.814066       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:30:23.210944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:30:23.824385       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:30:53.216372       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:30:53.835297       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:31:23.223072       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:31:23.846263       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] <==
	I1212 21:10:41.533568       1 server_others.go:69] "Using iptables proxy"
	I1212 21:10:41.570051       1 node.go:141] Successfully retrieved node IP: 192.168.72.253
	I1212 21:10:41.641796       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 21:10:41.641873       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 21:10:41.645683       1 server_others.go:152] "Using iptables Proxier"
	I1212 21:10:41.645840       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 21:10:41.646053       1 server.go:846] "Version info" version="v1.28.4"
	I1212 21:10:41.646092       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:10:41.646875       1 config.go:188] "Starting service config controller"
	I1212 21:10:41.646929       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 21:10:41.646976       1 config.go:97] "Starting endpoint slice config controller"
	I1212 21:10:41.646992       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 21:10:41.648503       1 config.go:315] "Starting node config controller"
	I1212 21:10:41.648557       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 21:10:41.747860       1 shared_informer.go:318] Caches are synced for service config
	I1212 21:10:41.748026       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 21:10:41.749429       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] <==
	I1212 21:10:37.745885       1 serving.go:348] Generated self-signed cert in-memory
	W1212 21:10:39.766679       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 21:10:39.766859       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 21:10:39.766898       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 21:10:39.766922       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 21:10:39.791293       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 21:10:39.791382       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:10:39.799900       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 21:10:39.800195       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:10:39.800243       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 21:10:39.800276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 21:10:39.900785       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:10:03 UTC, ends at Tue 2023-12-12 21:31:39 UTC. --
	Dec 12 21:29:18 default-k8s-diff-port-171828 kubelet[931]: E1212 21:29:18.073536     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:29:29 default-k8s-diff-port-171828 kubelet[931]: E1212 21:29:29.071991     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:29:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:29:34.089343     931 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:29:34 default-k8s-diff-port-171828 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:29:34 default-k8s-diff-port-171828 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:29:34 default-k8s-diff-port-171828 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:29:41 default-k8s-diff-port-171828 kubelet[931]: E1212 21:29:41.072528     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:29:53 default-k8s-diff-port-171828 kubelet[931]: E1212 21:29:53.072991     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:30:08 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:08.073305     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:30:23 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:23.072592     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:30:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:34.088096     931 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:30:34 default-k8s-diff-port-171828 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:30:34 default-k8s-diff-port-171828 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:30:34 default-k8s-diff-port-171828 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:30:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:34.108621     931 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 12 21:30:36 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:36.072304     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:30:47 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:47.072379     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:30:58 default-k8s-diff-port-171828 kubelet[931]: E1212 21:30:58.072091     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:31:09 default-k8s-diff-port-171828 kubelet[931]: E1212 21:31:09.072899     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:31:24 default-k8s-diff-port-171828 kubelet[931]: E1212 21:31:24.073219     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	Dec 12 21:31:34 default-k8s-diff-port-171828 kubelet[931]: E1212 21:31:34.089642     931 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:31:34 default-k8s-diff-port-171828 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:31:34 default-k8s-diff-port-171828 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:31:34 default-k8s-diff-port-171828 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:31:38 default-k8s-diff-port-171828 kubelet[931]: E1212 21:31:38.071865     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fqrqh" podUID="633d3468-a8df-4c9b-9bab-8c26ce998832"
	
	
	==> storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] <==
	I1212 21:10:41.287669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 21:11:11.294001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] <==
	I1212 21:11:12.477851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:11:12.489217       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:11:12.489353       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:11:29.898386       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:11:29.901217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-171828_ab619959-6c2b-45d8-8e13-36bb7dad0675!
	I1212 21:11:29.902351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8ac4db0-8089-47ee-a188-aec6180ea709", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-171828_ab619959-6c2b-45d8-8e13-36bb7dad0675 became leader
	I1212 21:11:30.001795       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-171828_ab619959-6c2b-45d8-8e13-36bb7dad0675!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fqrqh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 describe pod metrics-server-57f55c9bc5-fqrqh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-171828 describe pod metrics-server-57f55c9bc5-fqrqh: exit status 1 (64.544836ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fqrqh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-171828 describe pod metrics-server-57f55c9bc5-fqrqh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (448.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (300.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:25:20.139101   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-343495 -n no-preload-343495
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:30:14.164697516 +0000 UTC m=+5615.384870086
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-343495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-343495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.512µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-343495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-343495 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-343495 logs -n 25: (1.362745754s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo find                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo crio                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-690675                                       | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:29 UTC | 12 Dec 23 21:29 UTC |
	| start   | -p newest-cni-422706 --memory=2200 --alsologtostderr   | newest-cni-422706            | jenkins | v1.32.0 | 12 Dec 23 21:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:29:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:29:37.192234   66588 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:29:37.192507   66588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:29:37.192517   66588 out.go:309] Setting ErrFile to fd 2...
	I1212 21:29:37.192525   66588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:29:37.192737   66588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:29:37.193425   66588 out.go:303] Setting JSON to false
	I1212 21:29:37.194407   66588 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7931,"bootTime":1702408646,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:29:37.194472   66588 start.go:138] virtualization: kvm guest
	I1212 21:29:37.196842   66588 out.go:177] * [newest-cni-422706] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:29:37.198297   66588 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:29:37.199487   66588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:29:37.198365   66588 notify.go:220] Checking for updates...
	I1212 21:29:37.202039   66588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:29:37.203588   66588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:29:37.205100   66588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:29:37.206393   66588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:29:37.208140   66588 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:29:37.208236   66588 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:29:37.208319   66588 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:29:37.208395   66588 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:29:37.247124   66588 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 21:29:37.248509   66588 start.go:298] selected driver: kvm2
	I1212 21:29:37.248534   66588 start.go:902] validating driver "kvm2" against <nil>
	I1212 21:29:37.248551   66588 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:29:37.249610   66588 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:29:37.249712   66588 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:29:37.266564   66588 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:29:37.266615   66588 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1212 21:29:37.266638   66588 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 21:29:37.266850   66588 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 21:29:37.266926   66588 cni.go:84] Creating CNI manager for ""
	I1212 21:29:37.266939   66588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:29:37.266950   66588 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 21:29:37.266958   66588 start_flags.go:323] config:
	{Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:29:37.267091   66588 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:29:37.269312   66588 out.go:177] * Starting control plane node newest-cni-422706 in cluster newest-cni-422706
	I1212 21:29:37.270499   66588 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:29:37.270541   66588 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 21:29:37.270548   66588 cache.go:56] Caching tarball of preloaded images
	I1212 21:29:37.270615   66588 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:29:37.270626   66588 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 21:29:37.270737   66588 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/config.json ...
	I1212 21:29:37.270762   66588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/config.json: {Name:mk0241240ce56a3427daa37fbe173ec4673c9194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:29:37.270892   66588 start.go:365] acquiring machines lock for newest-cni-422706: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:29:37.270924   66588 start.go:369] acquired machines lock for "newest-cni-422706" in 17.951µs
	I1212 21:29:37.270939   66588 start.go:93] Provisioning new machine with config: &{Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:29:37.271002   66588 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 21:29:37.272563   66588 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 21:29:37.272719   66588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:29:37.272761   66588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:29:37.288370   66588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1212 21:29:37.288905   66588 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:29:37.289437   66588 main.go:141] libmachine: Using API Version  1
	I1212 21:29:37.289459   66588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:29:37.289860   66588 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:29:37.290067   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:29:37.290211   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:29:37.290380   66588 start.go:159] libmachine.API.Create for "newest-cni-422706" (driver="kvm2")
	I1212 21:29:37.290415   66588 client.go:168] LocalClient.Create starting
	I1212 21:29:37.290451   66588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem
	I1212 21:29:37.290498   66588 main.go:141] libmachine: Decoding PEM data...
	I1212 21:29:37.290528   66588 main.go:141] libmachine: Parsing certificate...
	I1212 21:29:37.290597   66588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem
	I1212 21:29:37.290626   66588 main.go:141] libmachine: Decoding PEM data...
	I1212 21:29:37.290654   66588 main.go:141] libmachine: Parsing certificate...
	I1212 21:29:37.290682   66588 main.go:141] libmachine: Running pre-create checks...
	I1212 21:29:37.290695   66588 main.go:141] libmachine: (newest-cni-422706) Calling .PreCreateCheck
	I1212 21:29:37.291043   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetConfigRaw
	I1212 21:29:37.291463   66588 main.go:141] libmachine: Creating machine...
	I1212 21:29:37.291478   66588 main.go:141] libmachine: (newest-cni-422706) Calling .Create
	I1212 21:29:37.291582   66588 main.go:141] libmachine: (newest-cni-422706) Creating KVM machine...
	I1212 21:29:37.292814   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found existing default KVM network
	I1212 21:29:37.294285   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:37.294162   66610 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025e050}
	I1212 21:29:37.299510   66588 main.go:141] libmachine: (newest-cni-422706) DBG | trying to create private KVM network mk-newest-cni-422706 192.168.39.0/24...
	I1212 21:29:37.374171   66588 main.go:141] libmachine: (newest-cni-422706) DBG | private KVM network mk-newest-cni-422706 192.168.39.0/24 created
	I1212 21:29:37.374219   66588 main.go:141] libmachine: (newest-cni-422706) Setting up store path in /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706 ...
	I1212 21:29:37.374235   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:37.374132   66610 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:29:37.374262   66588 main.go:141] libmachine: (newest-cni-422706) Building disk image from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 21:29:37.374288   66588 main.go:141] libmachine: (newest-cni-422706) Downloading /home/jenkins/minikube-integration/17734-9188/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 21:29:37.587853   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:37.587710   66610 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa...
	I1212 21:29:37.763860   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:37.763721   66610 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/newest-cni-422706.rawdisk...
	I1212 21:29:37.763892   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Writing magic tar header
	I1212 21:29:37.763920   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Writing SSH key tar header
	I1212 21:29:37.763935   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:37.763840   66610 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706 ...
	I1212 21:29:37.763998   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706
	I1212 21:29:37.764026   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube/machines
	I1212 21:29:37.764039   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:29:37.764065   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17734-9188
	I1212 21:29:37.764084   66588 main.go:141] libmachine: (newest-cni-422706) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706 (perms=drwx------)
	I1212 21:29:37.764100   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 21:29:37.764118   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home/jenkins
	I1212 21:29:37.764133   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Checking permissions on dir: /home
	I1212 21:29:37.764150   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Skipping /home - not owner
	I1212 21:29:37.764166   66588 main.go:141] libmachine: (newest-cni-422706) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube/machines (perms=drwxr-xr-x)
	I1212 21:29:37.764180   66588 main.go:141] libmachine: (newest-cni-422706) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188/.minikube (perms=drwxr-xr-x)
	I1212 21:29:37.764197   66588 main.go:141] libmachine: (newest-cni-422706) Setting executable bit set on /home/jenkins/minikube-integration/17734-9188 (perms=drwxrwxr-x)
	I1212 21:29:37.764215   66588 main.go:141] libmachine: (newest-cni-422706) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 21:29:37.764229   66588 main.go:141] libmachine: (newest-cni-422706) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 21:29:37.764240   66588 main.go:141] libmachine: (newest-cni-422706) Creating domain...
	I1212 21:29:37.765490   66588 main.go:141] libmachine: (newest-cni-422706) define libvirt domain using xml: 
	I1212 21:29:37.765520   66588 main.go:141] libmachine: (newest-cni-422706) <domain type='kvm'>
	I1212 21:29:37.765533   66588 main.go:141] libmachine: (newest-cni-422706)   <name>newest-cni-422706</name>
	I1212 21:29:37.765546   66588 main.go:141] libmachine: (newest-cni-422706)   <memory unit='MiB'>2200</memory>
	I1212 21:29:37.765571   66588 main.go:141] libmachine: (newest-cni-422706)   <vcpu>2</vcpu>
	I1212 21:29:37.765582   66588 main.go:141] libmachine: (newest-cni-422706)   <features>
	I1212 21:29:37.765597   66588 main.go:141] libmachine: (newest-cni-422706)     <acpi/>
	I1212 21:29:37.765609   66588 main.go:141] libmachine: (newest-cni-422706)     <apic/>
	I1212 21:29:37.765622   66588 main.go:141] libmachine: (newest-cni-422706)     <pae/>
	I1212 21:29:37.765634   66588 main.go:141] libmachine: (newest-cni-422706)     
	I1212 21:29:37.765650   66588 main.go:141] libmachine: (newest-cni-422706)   </features>
	I1212 21:29:37.765662   66588 main.go:141] libmachine: (newest-cni-422706)   <cpu mode='host-passthrough'>
	I1212 21:29:37.765674   66588 main.go:141] libmachine: (newest-cni-422706)   
	I1212 21:29:37.765685   66588 main.go:141] libmachine: (newest-cni-422706)   </cpu>
	I1212 21:29:37.765697   66588 main.go:141] libmachine: (newest-cni-422706)   <os>
	I1212 21:29:37.765712   66588 main.go:141] libmachine: (newest-cni-422706)     <type>hvm</type>
	I1212 21:29:37.765750   66588 main.go:141] libmachine: (newest-cni-422706)     <boot dev='cdrom'/>
	I1212 21:29:37.765775   66588 main.go:141] libmachine: (newest-cni-422706)     <boot dev='hd'/>
	I1212 21:29:37.765788   66588 main.go:141] libmachine: (newest-cni-422706)     <bootmenu enable='no'/>
	I1212 21:29:37.765801   66588 main.go:141] libmachine: (newest-cni-422706)   </os>
	I1212 21:29:37.765828   66588 main.go:141] libmachine: (newest-cni-422706)   <devices>
	I1212 21:29:37.765843   66588 main.go:141] libmachine: (newest-cni-422706)     <disk type='file' device='cdrom'>
	I1212 21:29:37.765870   66588 main.go:141] libmachine: (newest-cni-422706)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/boot2docker.iso'/>
	I1212 21:29:37.765885   66588 main.go:141] libmachine: (newest-cni-422706)       <target dev='hdc' bus='scsi'/>
	I1212 21:29:37.765898   66588 main.go:141] libmachine: (newest-cni-422706)       <readonly/>
	I1212 21:29:37.765912   66588 main.go:141] libmachine: (newest-cni-422706)     </disk>
	I1212 21:29:37.765925   66588 main.go:141] libmachine: (newest-cni-422706)     <disk type='file' device='disk'>
	I1212 21:29:37.765942   66588 main.go:141] libmachine: (newest-cni-422706)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 21:29:37.765960   66588 main.go:141] libmachine: (newest-cni-422706)       <source file='/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/newest-cni-422706.rawdisk'/>
	I1212 21:29:37.765975   66588 main.go:141] libmachine: (newest-cni-422706)       <target dev='hda' bus='virtio'/>
	I1212 21:29:37.765988   66588 main.go:141] libmachine: (newest-cni-422706)     </disk>
	I1212 21:29:37.766003   66588 main.go:141] libmachine: (newest-cni-422706)     <interface type='network'>
	I1212 21:29:37.766016   66588 main.go:141] libmachine: (newest-cni-422706)       <source network='mk-newest-cni-422706'/>
	I1212 21:29:37.766031   66588 main.go:141] libmachine: (newest-cni-422706)       <model type='virtio'/>
	I1212 21:29:37.766043   66588 main.go:141] libmachine: (newest-cni-422706)     </interface>
	I1212 21:29:37.766058   66588 main.go:141] libmachine: (newest-cni-422706)     <interface type='network'>
	I1212 21:29:37.766072   66588 main.go:141] libmachine: (newest-cni-422706)       <source network='default'/>
	I1212 21:29:37.766086   66588 main.go:141] libmachine: (newest-cni-422706)       <model type='virtio'/>
	I1212 21:29:37.766098   66588 main.go:141] libmachine: (newest-cni-422706)     </interface>
	I1212 21:29:37.766121   66588 main.go:141] libmachine: (newest-cni-422706)     <serial type='pty'>
	I1212 21:29:37.766144   66588 main.go:141] libmachine: (newest-cni-422706)       <target port='0'/>
	I1212 21:29:37.766158   66588 main.go:141] libmachine: (newest-cni-422706)     </serial>
	I1212 21:29:37.766169   66588 main.go:141] libmachine: (newest-cni-422706)     <console type='pty'>
	I1212 21:29:37.766183   66588 main.go:141] libmachine: (newest-cni-422706)       <target type='serial' port='0'/>
	I1212 21:29:37.766194   66588 main.go:141] libmachine: (newest-cni-422706)     </console>
	I1212 21:29:37.766212   66588 main.go:141] libmachine: (newest-cni-422706)     <rng model='virtio'>
	I1212 21:29:37.766228   66588 main.go:141] libmachine: (newest-cni-422706)       <backend model='random'>/dev/random</backend>
	I1212 21:29:37.766242   66588 main.go:141] libmachine: (newest-cni-422706)     </rng>
	I1212 21:29:37.766254   66588 main.go:141] libmachine: (newest-cni-422706)     
	I1212 21:29:37.766266   66588 main.go:141] libmachine: (newest-cni-422706)     
	I1212 21:29:37.766277   66588 main.go:141] libmachine: (newest-cni-422706)   </devices>
	I1212 21:29:37.766298   66588 main.go:141] libmachine: (newest-cni-422706) </domain>
	I1212 21:29:37.766319   66588 main.go:141] libmachine: (newest-cni-422706) 
	I1212 21:29:37.770761   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:d9:96:8e in network default
	I1212 21:29:37.771409   66588 main.go:141] libmachine: (newest-cni-422706) Ensuring networks are active...
	I1212 21:29:37.771428   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:37.772012   66588 main.go:141] libmachine: (newest-cni-422706) Ensuring network default is active
	I1212 21:29:37.772405   66588 main.go:141] libmachine: (newest-cni-422706) Ensuring network mk-newest-cni-422706 is active
	I1212 21:29:37.772955   66588 main.go:141] libmachine: (newest-cni-422706) Getting domain xml...
	I1212 21:29:37.773720   66588 main.go:141] libmachine: (newest-cni-422706) Creating domain...
	I1212 21:29:39.081170   66588 main.go:141] libmachine: (newest-cni-422706) Waiting to get IP...
	I1212 21:29:39.082181   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:39.082684   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:39.082715   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:39.082646   66610 retry.go:31] will retry after 219.155598ms: waiting for machine to come up
	I1212 21:29:39.303184   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:39.303714   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:39.303745   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:39.303657   66610 retry.go:31] will retry after 242.975783ms: waiting for machine to come up
	I1212 21:29:39.548307   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:39.548791   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:39.548819   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:39.548742   66610 retry.go:31] will retry after 309.128149ms: waiting for machine to come up
	I1212 21:29:39.858944   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:39.859445   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:39.859477   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:39.859398   66610 retry.go:31] will retry after 535.169831ms: waiting for machine to come up
	I1212 21:29:40.396239   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:40.396823   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:40.396852   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:40.396772   66610 retry.go:31] will retry after 616.909619ms: waiting for machine to come up
	I1212 21:29:41.015542   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:41.016075   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:41.016107   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:41.015995   66610 retry.go:31] will retry after 845.229047ms: waiting for machine to come up
	I1212 21:29:41.862619   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:41.863207   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:41.863250   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:41.863144   66610 retry.go:31] will retry after 791.199641ms: waiting for machine to come up
	I1212 21:29:42.655607   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:42.656135   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:42.656164   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:42.656071   66610 retry.go:31] will retry after 1.402125182s: waiting for machine to come up
	I1212 21:29:44.060515   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:44.060989   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:44.061015   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:44.060964   66610 retry.go:31] will retry after 1.735850117s: waiting for machine to come up
	I1212 21:29:45.798006   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:45.798466   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:45.798497   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:45.798397   66610 retry.go:31] will retry after 1.615823782s: waiting for machine to come up
	I1212 21:29:47.416035   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:47.416489   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:47.416521   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:47.416429   66610 retry.go:31] will retry after 1.786183553s: waiting for machine to come up
	I1212 21:29:49.204594   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:49.205131   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:49.205238   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:49.205065   66610 retry.go:31] will retry after 3.259469644s: waiting for machine to come up
	I1212 21:29:52.466460   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:52.466828   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:52.466856   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:52.466805   66610 retry.go:31] will retry after 4.448938582s: waiting for machine to come up
	I1212 21:29:56.920333   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:29:56.920824   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find current IP address of domain newest-cni-422706 in network mk-newest-cni-422706
	I1212 21:29:56.920849   66588 main.go:141] libmachine: (newest-cni-422706) DBG | I1212 21:29:56.920782   66610 retry.go:31] will retry after 5.01121795s: waiting for machine to come up
	I1212 21:30:01.937011   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:01.937521   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has current primary IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:01.937548   66588 main.go:141] libmachine: (newest-cni-422706) Found IP for machine: 192.168.39.163
	I1212 21:30:01.937562   66588 main.go:141] libmachine: (newest-cni-422706) Reserving static IP address...
	I1212 21:30:01.937965   66588 main.go:141] libmachine: (newest-cni-422706) DBG | unable to find host DHCP lease matching {name: "newest-cni-422706", mac: "52:54:00:b4:d1:77", ip: "192.168.39.163"} in network mk-newest-cni-422706
	I1212 21:30:02.025895   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Getting to WaitForSSH function...
	I1212 21:30:02.025922   66588 main.go:141] libmachine: (newest-cni-422706) Reserved static IP address: 192.168.39.163
	I1212 21:30:02.025937   66588 main.go:141] libmachine: (newest-cni-422706) Waiting for SSH to be available...
	I1212 21:30:02.028963   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.029426   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.029504   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.029618   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Using SSH client type: external
	I1212 21:30:02.029677   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa (-rw-------)
	I1212 21:30:02.029729   66588 main.go:141] libmachine: (newest-cni-422706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:30:02.029752   66588 main.go:141] libmachine: (newest-cni-422706) DBG | About to run SSH command:
	I1212 21:30:02.029768   66588 main.go:141] libmachine: (newest-cni-422706) DBG | exit 0
	I1212 21:30:02.119737   66588 main.go:141] libmachine: (newest-cni-422706) DBG | SSH cmd err, output: <nil>: 
	I1212 21:30:02.120040   66588 main.go:141] libmachine: (newest-cni-422706) KVM machine creation complete!
	I1212 21:30:02.120327   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetConfigRaw
	I1212 21:30:02.120947   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:02.121157   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:02.121378   66588 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 21:30:02.121392   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetState
	I1212 21:30:02.122783   66588 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 21:30:02.122801   66588 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 21:30:02.122811   66588 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 21:30:02.122821   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.125347   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.125882   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.125915   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.126085   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:02.126277   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.126430   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.126576   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:02.126742   66588 main.go:141] libmachine: Using SSH client type: native
	I1212 21:30:02.127174   66588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:30:02.127188   66588 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 21:30:02.242832   66588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:02.242856   66588 main.go:141] libmachine: Detecting the provisioner...
	I1212 21:30:02.242864   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.245667   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.246015   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.246053   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.246243   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:02.246465   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.246653   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.246792   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:02.246951   66588 main.go:141] libmachine: Using SSH client type: native
	I1212 21:30:02.247306   66588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:30:02.247322   66588 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 21:30:02.364388   66588 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 21:30:02.364488   66588 main.go:141] libmachine: found compatible host: buildroot
	I1212 21:30:02.364505   66588 main.go:141] libmachine: Provisioning with buildroot...
	I1212 21:30:02.364520   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:30:02.364818   66588 buildroot.go:166] provisioning hostname "newest-cni-422706"
	I1212 21:30:02.364848   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:30:02.365024   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.367748   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.368165   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.368187   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.368379   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:02.368582   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.368742   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.368893   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:02.369058   66588 main.go:141] libmachine: Using SSH client type: native
	I1212 21:30:02.369418   66588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:30:02.369433   66588 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-422706 && echo "newest-cni-422706" | sudo tee /etc/hostname
	I1212 21:30:02.493900   66588 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-422706
	
	I1212 21:30:02.493932   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.497251   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.497597   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.497629   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.497786   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:02.498060   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.498233   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.498408   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:02.498592   66588 main.go:141] libmachine: Using SSH client type: native
	I1212 21:30:02.498933   66588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:30:02.498950   66588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-422706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-422706/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-422706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:30:02.625433   66588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:30:02.625480   66588 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:30:02.625518   66588 buildroot.go:174] setting up certificates
	I1212 21:30:02.625533   66588 provision.go:83] configureAuth start
	I1212 21:30:02.625548   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetMachineName
	I1212 21:30:02.625832   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:30:02.628740   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.629104   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.629131   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.629283   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.631573   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.632042   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.632071   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.632241   66588 provision.go:138] copyHostCerts
	I1212 21:30:02.632314   66588 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:30:02.632338   66588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:30:02.632453   66588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:30:02.632646   66588 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:30:02.632663   66588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:30:02.632708   66588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:30:02.632791   66588 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:30:02.632802   66588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:30:02.632838   66588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:30:02.632901   66588 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.newest-cni-422706 san=[192.168.39.163 192.168.39.163 localhost 127.0.0.1 minikube newest-cni-422706]
	I1212 21:30:02.818296   66588 provision.go:172] copyRemoteCerts
	I1212 21:30:02.818363   66588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:30:02.818385   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.821205   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.821530   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.821559   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.821778   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:02.822004   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.822163   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:02.822311   66588 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:30:02.910086   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:30:02.936565   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:30:02.963559   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:30:02.989239   66588 provision.go:86] duration metric: configureAuth took 363.69009ms
	I1212 21:30:02.989286   66588 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:30:02.989577   66588 config.go:182] Loaded profile config "newest-cni-422706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:30:02.989688   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:02.993056   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.993488   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:02.993521   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:02.993687   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:02.993930   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.994147   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:02.994316   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:02.994532   66588 main.go:141] libmachine: Using SSH client type: native
	I1212 21:30:02.994879   66588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:30:02.994902   66588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:30:03.326835   66588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:30:03.326882   66588 main.go:141] libmachine: Checking connection to Docker...
	I1212 21:30:03.326895   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetURL
	I1212 21:30:03.328399   66588 main.go:141] libmachine: (newest-cni-422706) DBG | Using libvirt version 6000000
	I1212 21:30:03.330797   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.331139   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.331171   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.331375   66588 main.go:141] libmachine: Docker is up and running!
	I1212 21:30:03.331392   66588 main.go:141] libmachine: Reticulating splines...
	I1212 21:30:03.331398   66588 client.go:171] LocalClient.Create took 26.040972166s
	I1212 21:30:03.331422   66588 start.go:167] duration metric: libmachine.API.Create for "newest-cni-422706" took 26.041041526s
	I1212 21:30:03.331435   66588 start.go:300] post-start starting for "newest-cni-422706" (driver="kvm2")
	I1212 21:30:03.331452   66588 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:30:03.331473   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:03.331734   66588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:30:03.331761   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:03.333893   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.334190   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.334210   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.334396   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:03.334572   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:03.334724   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:03.334861   66588 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:30:03.421963   66588 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:30:03.427259   66588 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:30:03.427286   66588 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:30:03.427360   66588 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:30:03.427452   66588 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:30:03.427568   66588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:30:03.436546   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:30:03.462613   66588 start.go:303] post-start completed in 131.161439ms
	I1212 21:30:03.462656   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetConfigRaw
	I1212 21:30:03.463200   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:30:03.465845   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.466316   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.466345   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.466665   66588 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/config.json ...
	I1212 21:30:03.466869   66588 start.go:128] duration metric: createHost completed in 26.195857505s
	I1212 21:30:03.466892   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:03.469101   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.469450   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.469489   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.469631   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:03.469819   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:03.469961   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:03.470136   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:03.470345   66588 main.go:141] libmachine: Using SSH client type: native
	I1212 21:30:03.470730   66588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1212 21:30:03.470745   66588 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:30:03.584026   66588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702416603.567591774
	
	I1212 21:30:03.584056   66588 fix.go:206] guest clock: 1702416603.567591774
	I1212 21:30:03.584065   66588 fix.go:219] Guest: 2023-12-12 21:30:03.567591774 +0000 UTC Remote: 2023-12-12 21:30:03.466879758 +0000 UTC m=+26.326610640 (delta=100.712016ms)
	I1212 21:30:03.584090   66588 fix.go:190] guest clock delta is within tolerance: 100.712016ms
	I1212 21:30:03.584096   66588 start.go:83] releasing machines lock for "newest-cni-422706", held for 26.313163884s
	I1212 21:30:03.584119   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:03.584393   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:30:03.586975   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.587288   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.587365   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.587537   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:03.588009   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:03.588171   66588 main.go:141] libmachine: (newest-cni-422706) Calling .DriverName
	I1212 21:30:03.588258   66588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:30:03.588294   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:03.588400   66588 ssh_runner.go:195] Run: cat /version.json
	I1212 21:30:03.588429   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHHostname
	I1212 21:30:03.590918   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.591175   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.591333   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.591369   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.591546   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:03.591567   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:03.591573   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:03.591744   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:03.591827   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHPort
	I1212 21:30:03.591897   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:03.591983   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHKeyPath
	I1212 21:30:03.592070   66588 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:30:03.592117   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetSSHUsername
	I1212 21:30:03.592257   66588 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/newest-cni-422706/id_rsa Username:docker}
	I1212 21:30:03.705008   66588 ssh_runner.go:195] Run: systemctl --version
	I1212 21:30:03.711897   66588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:30:03.872687   66588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:30:03.879543   66588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:30:03.879628   66588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:30:03.895044   66588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:30:03.895070   66588 start.go:475] detecting cgroup driver to use...
	I1212 21:30:03.895138   66588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:30:03.911809   66588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:30:03.925317   66588 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:30:03.925379   66588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:30:03.938950   66588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:30:03.952466   66588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:30:04.070180   66588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:30:04.194473   66588 docker.go:219] disabling docker service ...
	I1212 21:30:04.194543   66588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:30:04.208961   66588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:30:04.221771   66588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:30:04.333783   66588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:30:04.458519   66588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:30:04.474448   66588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:30:04.493754   66588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:30:04.493822   66588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:30:04.504669   66588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:30:04.504736   66588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:30:04.515029   66588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:30:04.526137   66588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:30:04.536872   66588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:30:04.547787   66588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:30:04.557069   66588 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:30:04.557141   66588 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:30:04.571429   66588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:30:04.581206   66588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:30:04.699681   66588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:30:04.888515   66588 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:30:04.888609   66588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:30:04.896768   66588 start.go:543] Will wait 60s for crictl version
	I1212 21:30:04.896856   66588 ssh_runner.go:195] Run: which crictl
	I1212 21:30:04.901091   66588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:30:04.941041   66588 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:30:04.941128   66588 ssh_runner.go:195] Run: crio --version
	I1212 21:30:04.995123   66588 ssh_runner.go:195] Run: crio --version
	I1212 21:30:05.047592   66588 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 21:30:05.048944   66588 main.go:141] libmachine: (newest-cni-422706) Calling .GetIP
	I1212 21:30:05.051724   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:05.052184   66588 main.go:141] libmachine: (newest-cni-422706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:d1:77", ip: ""} in network mk-newest-cni-422706: {Iface:virbr4 ExpiryTime:2023-12-12 22:29:53 +0000 UTC Type:0 Mac:52:54:00:b4:d1:77 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:newest-cni-422706 Clientid:01:52:54:00:b4:d1:77}
	I1212 21:30:05.052218   66588 main.go:141] libmachine: (newest-cni-422706) DBG | domain newest-cni-422706 has defined IP address 192.168.39.163 and MAC address 52:54:00:b4:d1:77 in network mk-newest-cni-422706
	I1212 21:30:05.052435   66588 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:30:05.056687   66588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:05.070007   66588 localpath.go:92] copying /home/jenkins/minikube-integration/17734-9188/.minikube/client.crt -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/client.crt
	I1212 21:30:05.070177   66588 localpath.go:117] copying /home/jenkins/minikube-integration/17734-9188/.minikube/client.key -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/client.key
	I1212 21:30:05.072151   66588 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 21:30:05.073627   66588 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:30:05.073726   66588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:30:05.109953   66588 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:30:05.110032   66588 ssh_runner.go:195] Run: which lz4
	I1212 21:30:05.114389   66588 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:30:05.119364   66588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:30:05.119399   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401739178 bytes)
	I1212 21:30:06.764957   66588 crio.go:444] Took 1.650617 seconds to copy over tarball
	I1212 21:30:06.765038   66588 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:30:09.649074   66588 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.883997963s)
	I1212 21:30:09.649105   66588 crio.go:451] Took 2.884119 seconds to extract the tarball
	I1212 21:30:09.649114   66588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:30:09.690928   66588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:30:09.775322   66588 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:30:09.775344   66588 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:30:09.775422   66588 ssh_runner.go:195] Run: crio config
	I1212 21:30:09.844479   66588 cni.go:84] Creating CNI manager for ""
	I1212 21:30:09.844500   66588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:30:09.844517   66588 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1212 21:30:09.844537   66588 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-422706 NodeName:newest-cni-422706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:30:09.844664   66588 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-422706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:30:09.844734   66588 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-422706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:30:09.844784   66588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:30:09.854927   66588 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:30:09.855007   66588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:30:09.864603   66588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1212 21:30:09.882581   66588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:30:09.900381   66588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1212 21:30:09.919133   66588 ssh_runner.go:195] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:30:09.923084   66588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:30:09.936526   66588 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706 for IP: 192.168.39.163
	I1212 21:30:09.936569   66588 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:09.936736   66588 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:30:09.936798   66588 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:30:09.936875   66588 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/client.key
	I1212 21:30:09.936898   66588 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key.a64e5ae8
	I1212 21:30:09.936910   66588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt.a64e5ae8 with IP's: [192.168.39.163 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 21:30:10.080326   66588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt.a64e5ae8 ...
	I1212 21:30:10.080360   66588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt.a64e5ae8: {Name:mk74321368a8748b4f73afc2d0c7473dd1231ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:10.080543   66588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key.a64e5ae8 ...
	I1212 21:30:10.080560   66588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key.a64e5ae8: {Name:mk10f7bfec39805625e83ee9d2561803f8698db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:10.080653   66588 certs.go:337] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt.a64e5ae8 -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt
	I1212 21:30:10.080731   66588 certs.go:341] copying /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key.a64e5ae8 -> /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key
	I1212 21:30:10.080822   66588 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.key
	I1212 21:30:10.080841   66588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.crt with IP's: []
	I1212 21:30:10.218684   66588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.crt ...
	I1212 21:30:10.218720   66588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.crt: {Name:mk627546e273e6859ef8f6eb4d3702e592c1c5b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:10.228643   66588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.key ...
	I1212 21:30:10.228679   66588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.key: {Name:mk272d9c2d06a1a255a4b11f693326f1a53cfdc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:30:10.228930   66588 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:30:10.228978   66588 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:30:10.228993   66588 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:30:10.229039   66588 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:30:10.229085   66588 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:30:10.229127   66588 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:30:10.229224   66588 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:30:10.230058   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:30:10.274322   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:30:10.300395   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:30:10.329022   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/newest-cni-422706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:30:10.357575   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:30:10.383643   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:30:10.410704   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:30:10.438220   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:30:10.465680   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:30:10.492832   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:30:10.520195   66588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:30:10.547517   66588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:30:10.565171   66588 ssh_runner.go:195] Run: openssl version
	I1212 21:30:10.571670   66588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:30:10.583257   66588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:30:10.589054   66588 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:30:10.589134   66588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:30:10.595801   66588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:30:10.607058   66588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:30:10.618686   66588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:10.624336   66588 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:10.624400   66588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:30:10.631279   66588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:30:10.643231   66588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:30:10.654744   66588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:30:10.661067   66588 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:30:10.661139   66588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:30:10.667685   66588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:30:10.678632   66588 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:30:10.683740   66588 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
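The block above shows the usual minikube certificate provisioning flow: the profile and CA keypairs are copied into the guest with scp, each CA PEM is dropped under /usr/share/ca-certificates, and it is then registered with the system trust store by asking openssl for its subject hash and symlinking /etc/ssl/certs/<hash>.0 at the PEM. A minimal Go sketch of that hash-and-symlink step follows; it is illustrative only (this is not minikube's ssh_runner-based code, and the PEM path in main is just a placeholder taken from the log).

// Illustrative sketch: register a CA certificate with the system trust store by
// hashing it with openssl and creating the <hash>.0 symlink, mirroring the
// "openssl x509 -hash" and "ln -fs" commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// ln -fs <pem> /etc/ssl/certs/<hash>.0 — drop any stale link first.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Placeholder path for the example, taken from the log above.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}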
	I1212 21:30:10.683817   66588 kubeadm.go:404] StartCluster: {Name:newest-cni-422706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-422706 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:30:10.683931   66588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:30:10.683998   66588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:30:10.728497   66588 cri.go:89] found id: ""
	I1212 21:30:10.728560   66588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:30:10.740505   66588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:30:10.750508   66588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:30:10.760246   66588 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:30:10.760339   66588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 21:30:11.240329   66588 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
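Just before kubeadm init, the log records a quick "config check": it lists the four kubeconfig files a previous kubeadm run would have written, sees that none of them exist (exit status 2), and therefore skips stale-config cleanup and starts init directly with a long --ignore-preflight-errors list. A rough sketch of that existence check, assuming a plain os.Stat per file rather than minikube's actual ssh-based ls (illustrative only, not the kubeadm.go implementation):

// Illustrative sketch: decide whether stale kubeadm configs from a previous run
// need cleaning up before "kubeadm init", based on the same four files the log
// above checks with "ls -la".
package main

import (
	"fmt"
	"os"
)

var kubeadmConfigs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

// needsStaleConfigCleanup reports whether every config from a previous kubeadm
// run is present; if any is missing (as on this first start) there is nothing
// to clean up.
func needsStaleConfigCleanup() bool {
	for _, path := range kubeadmConfigs {
		if _, err := os.Stat(path); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if needsStaleConfigCleanup() {
		fmt.Println("stale kubeadm configs found, cleaning up before init")
	} else {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}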
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:10:27 UTC, ends at Tue 2023-12-12 21:30:15 UTC. --
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.939047699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416614939031308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=cc96d603-5638-4431-a2e8-5f88db142f17 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.939871733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46aaf417-fa2f-4266-86fd-104a611af4ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.939969029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46aaf417-fa2f-4266-86fd-104a611af4ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.940332018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46aaf417-fa2f-4266-86fd-104a611af4ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.995344836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5a4ff4c7-beb9-440b-a392-52ebe7212dd9 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.995432545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5a4ff4c7-beb9-440b-a392-52ebe7212dd9 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.997029171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=050754ed-ef71-40e3-801b-bc53776c7446 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.997411615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416614997398368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=050754ed-ef71-40e3-801b-bc53776c7446 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.998056692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a3da3e2-32b9-4c61-afef-fe2df22b1b82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.998182518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a3da3e2-32b9-4c61-afef-fe2df22b1b82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:14 no-preload-343495 crio[714]: time="2023-12-12 21:30:14.998369768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a3da3e2-32b9-4c61-afef-fe2df22b1b82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.041644493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a342f2e1-0e03-42ce-8636-d73da06cc460 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.041731578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a342f2e1-0e03-42ce-8636-d73da06cc460 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.042886236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6ef8d17a-b4f6-4bf5-b5f2-620589ce88e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.043500874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416615043466487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=6ef8d17a-b4f6-4bf5-b5f2-620589ce88e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.044172894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=80278428-48e7-4990-b4e1-980be3486ef9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.044222686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=80278428-48e7-4990-b4e1-980be3486ef9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.044416049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=80278428-48e7-4990-b4e1-980be3486ef9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.083915729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b00f9815-1aba-4adf-b432-cd6bca7847d6 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.084030143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b00f9815-1aba-4adf-b432-cd6bca7847d6 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.087510789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d1fb4d5a-29f2-4b18-9dd8-33cae178a458 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.087964024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416615087942826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d1fb4d5a-29f2-4b18-9dd8-33cae178a458 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.088714994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d663a94c-9964-4696-9644-43228a1101f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.088812549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d663a94c-9964-4696-9644-43228a1101f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:30:15 no-preload-343495 crio[714]: time="2023-12-12 21:30:15.089019211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4,PodSandboxId:ba06521563fe8fe48f51a05c09e69291c7fe641610cda4b8408ac379ba4346a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702415771729950801,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba6a30c-79ab-43e4-92fe-7c11a6046571,},Annotations:map[string]string{io.kubernetes.container.hash: 56f4f644,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063,PodSandboxId:7aeada5e5720734d0b0adfa7d0dcd5951b8c46a4f9d1834bf2fb22e1752525a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702415771506726878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glrvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b708fd-e950-4fe9-adbc-dece2985edd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e3ef32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3,PodSandboxId:efbd172f52958171393f456bfc37da964c9fca45252af0193c59c648de25b279,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702415770973224532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-466sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a22351-0561-4345-8997-ce6b7ab438f7,},Annotations:map[string]string{io.kubernetes.container.hash: 609e9f38,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd,PodSandboxId:b2df9f9cb1384749968bbe8799ae669ed7e24327800d39fdb873baea238ae880,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702415749033707851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4caa15d98c74fbec43f951bd7ab2518b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82edea7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a,PodSandboxId:4a0d87886d52aadc2ec17855c0e151883f2fab2c843c0e46d3ab7a687d9b7292,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702415748896716442,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b80ddbd5607ff5f2fefa235705c2b44a,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06,PodSandboxId:a16294de5c2dd51a73fa935633d53c2262648dc8c6e7f85c4d49f2b941946aed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702415748650058363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2ceddd091bda0c281239edb090401ff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b016e094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b,PodSandboxId:24ad9fecd9244ff936f7769d1fdbf95776663ef6096e1ffcbe55d9b477484e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702415748528599715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-343495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73117d92df8ede1aee030df545572c,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d663a94c-9964-4696-9644-43228a1101f1 name=/runtime.v1.RuntimeService/ListContainers
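The CRI-O debug lines above are the runtime logging each CRI request it serves (Version, ImageFsInfo, ListContainers), which clients such as the kubelet and crictl issue routinely; the container list they return is the same set shown in the container status table below. The harness queries it the same way earlier in this log, by shelling out to crictl; a small illustrative Go wrapper around that command (not part of the test harness) might look like:

// Illustrative sketch: list kube-system container IDs by running the same
// crictl command that appears in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func listKubeSystemContainers() ([]byte, error) {
	// Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	cmd := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	return cmd.Output()
}

func main() {
	out, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("container IDs:\n%s", out)
}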
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	410771844ae4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   ba06521563fe8       storage-provisioner
	dce7bff2c61d6       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   7aeada5e57207       kube-proxy-glrvd
	be26409f1ba8e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   efbd172f52958       coredns-76f75df574-466sr
	787a1144b71c5       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   b2df9f9cb1384       etcd-no-preload-343495
	d26db7fee6c9e       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   4a0d87886d52a       kube-scheduler-no-preload-343495
	f9708b2bba83f       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   a16294de5c2dd       kube-apiserver-no-preload-343495
	ae2f388693d39       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   24ad9fecd9244       kube-controller-manager-no-preload-343495
	
	
	==> coredns [be26409f1ba8e841a15f04927beabc2ed1a1c19129f6a6ac7c035c1d7b96a2f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60825 - 21233 "HINFO IN 1011411155478666539.2533239205206563428. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010454599s
	
	
	==> describe nodes <==
	Name:               no-preload-343495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-343495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=no-preload-343495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-343495
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 21:30:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:26:28 +0000   Tue, 12 Dec 2023 21:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:26:28 +0000   Tue, 12 Dec 2023 21:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:26:28 +0000   Tue, 12 Dec 2023 21:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:26:28 +0000   Tue, 12 Dec 2023 21:15:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.176
	  Hostname:    no-preload-343495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9916e37b2280452399561c1888073016
	  System UUID:                9916e37b-2280-4523-9956-1c1888073016
	  Boot ID:                    78a30efc-5e15-4263-ba93-714a7384fb57
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-466sr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-343495                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-343495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-343495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-glrvd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-343495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-xc79n              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-343495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-343495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-343495 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-343495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-343495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-343495 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node no-preload-343495 status is now: NodeNotReady
	  Normal  NodeReady                14m                kubelet          Node no-preload-343495 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-343495 event: Registered Node no-preload-343495 in Controller
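For reference, the Allocated resources figures above are just the column sums of the pod requests and limits listed in the table: 100m + 100m + 250m + 200m + 100m + 100m = 850m of CPU requests, about 42% of the node's 2000m capacity, and 70Mi + 100Mi + 200Mi = 370Mi of memory requests (roughly 17% of the ~2.1 GiB allocatable), with 170Mi of memory limits (about 8%).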
	
	
	==> dmesg <==
	[Dec12 21:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070530] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.126680] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.511665] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142408] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.557951] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.252734] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.117782] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.149017] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.103233] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.225197] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Dec12 21:11] systemd-fstab-generator[1330]: Ignoring "noauto" for root device
	[ +20.711752] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 21:15] systemd-fstab-generator[3956]: Ignoring "noauto" for root device
	[  +9.846870] systemd-fstab-generator[4281]: Ignoring "noauto" for root device
	[Dec12 21:16] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [787a1144b71c550b7aaef03feddc00eecae3314d86298b5bb1fb323b394d8acd] <==
	{"level":"info","ts":"2023-12-12T21:15:51.110688Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T21:15:51.1108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.176:2380"}
	{"level":"info","ts":"2023-12-12T21:15:51.714196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T21:15:51.714275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T21:15:51.714318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a received MsgPreVoteResp from 4f4f572eb29375a at term 1"}
	{"level":"info","ts":"2023-12-12T21:15:51.714335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.714343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a received MsgVoteResp from 4f4f572eb29375a at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.714353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became leader at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.714363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4f4f572eb29375a elected leader 4f4f572eb29375a at term 2"}
	{"level":"info","ts":"2023-12-12T21:15:51.715936Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.7171Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4f4f572eb29375a","local-member-attributes":"{Name:no-preload-343495 ClientURLs:[https://192.168.61.176:2379]}","request-path":"/0/members/4f4f572eb29375a/attributes","cluster-id":"310df9cc729b3e75","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T21:15:51.717223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T21:15:51.717765Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"310df9cc729b3e75","local-member-id":"4f4f572eb29375a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.717856Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.717889Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T21:15:51.7179Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T21:15:51.718932Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T21:15:51.718991Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T21:15:51.719635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.176:2379"}
	{"level":"info","ts":"2023-12-12T21:15:51.720606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T21:25:51.75791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":713}
	{"level":"info","ts":"2023-12-12T21:25:51.760902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":713,"took":"2.457046ms","hash":4132763611}
	{"level":"info","ts":"2023-12-12T21:25:51.760998Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4132763611,"revision":713,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T21:30:11.487361Z","caller":"traceutil/trace.go:171","msg":"trace[1924990668] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"338.211296ms","start":"2023-12-12T21:30:11.149068Z","end":"2023-12-12T21:30:11.487279Z","steps":["trace[1924990668] 'process raft request'  (duration: 337.936423ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T21:30:11.490907Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T21:30:11.149038Z","time spent":"338.78099ms","remote":"127.0.0.1:42048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1166 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 21:30:15 up 19 min,  0 users,  load average: 0.17, 0.24, 0.19
	Linux no-preload-343495 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f9708b2bba83f2d1f8f58192ed20b7469b8811778aecfe7ac47e1bec503b8e06] <==
	I1212 21:23:54.262245       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:25:53.262595       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:25:53.262894       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1212 21:25:54.264035       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:25:54.264207       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:25:54.264278       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:25:54.264209       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:25:54.264458       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:25:54.265386       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:26:54.265050       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:26:54.265203       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:26:54.265219       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:26:54.266518       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:26:54.266633       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:26:54.266668       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:28:54.265932       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:28:54.266014       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 21:28:54.266033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 21:28:54.267083       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 21:28:54.267250       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:28:54.267263       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae2f388693d3971f166de6ec721464b044b6347176ef5db8c7f848f8b01e299b] <==
	I1212 21:24:39.055640       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:25:08.664478       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:25:09.064342       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:25:38.670361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:25:39.073297       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:26:08.676780       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:26:09.081352       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:26:38.682382       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:26:39.091909       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:27:08.687864       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:27:09.100999       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 21:27:09.795305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="352.756µs"
	I1212 21:27:23.806820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="214.347µs"
	E1212 21:27:38.694523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:27:39.110770       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:28:08.703234       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:28:09.119366       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:28:38.709557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:28:39.128387       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:29:08.715210       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:29:09.136832       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:29:38.721669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:29:39.146980       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 21:30:08.727855       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 21:30:09.167390       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [dce7bff2c61d6e1eedd859343097cc89cc0662f64e38ca5ba4f749b51260f063] <==
	I1212 21:16:11.743894       1 server_others.go:72] "Using iptables proxy"
	I1212 21:16:11.760586       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.176"]
	I1212 21:16:11.872304       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 21:16:11.872373       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 21:16:11.872393       1 server_others.go:168] "Using iptables Proxier"
	I1212 21:16:11.875543       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 21:16:11.875752       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1212 21:16:11.875792       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:16:11.877427       1 config.go:188] "Starting service config controller"
	I1212 21:16:11.877474       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 21:16:11.877500       1 config.go:97] "Starting endpoint slice config controller"
	I1212 21:16:11.877504       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 21:16:11.880193       1 config.go:315] "Starting node config controller"
	I1212 21:16:11.880230       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 21:16:11.978511       1 shared_informer.go:318] Caches are synced for service config
	I1212 21:16:11.978793       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 21:16:11.981808       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d26db7fee6c9ee68305a95061fd2281d54a75d10dd2d3765b369f4bedbb1eb1a] <==
	E1212 21:15:53.273812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 21:15:53.273849       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:15:53.273889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 21:15:53.273945       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:53.273954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 21:15:53.274018       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:53.274054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:53.275603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.109014       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 21:15:54.109072       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 21:15:54.233971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:54.234036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.351940       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:15:54.352001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 21:15:54.427981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:54.428041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.450002       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:15:54.450078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 21:15:54.588980       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:15:54.589036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 21:15:54.645235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:54.645305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 21:15:54.647723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:54.647794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1212 21:15:57.064415       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:10:27 UTC, ends at Tue 2023-12-12 21:30:15 UTC. --
	Dec 12 21:27:36 no-preload-343495 kubelet[4288]: E1212 21:27:36.776721    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:27:50 no-preload-343495 kubelet[4288]: E1212 21:27:50.776981    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:27:56 no-preload-343495 kubelet[4288]: E1212 21:27:56.891220    4288 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:27:56 no-preload-343495 kubelet[4288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:27:56 no-preload-343495 kubelet[4288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:27:56 no-preload-343495 kubelet[4288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:28:04 no-preload-343495 kubelet[4288]: E1212 21:28:04.776863    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:28:15 no-preload-343495 kubelet[4288]: E1212 21:28:15.776720    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:28:30 no-preload-343495 kubelet[4288]: E1212 21:28:30.777036    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:28:41 no-preload-343495 kubelet[4288]: E1212 21:28:41.776484    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:28:53 no-preload-343495 kubelet[4288]: E1212 21:28:53.775560    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:28:56 no-preload-343495 kubelet[4288]: E1212 21:28:56.893095    4288 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:28:56 no-preload-343495 kubelet[4288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:28:56 no-preload-343495 kubelet[4288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:28:56 no-preload-343495 kubelet[4288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:29:06 no-preload-343495 kubelet[4288]: E1212 21:29:06.777344    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:29:19 no-preload-343495 kubelet[4288]: E1212 21:29:19.775665    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:29:33 no-preload-343495 kubelet[4288]: E1212 21:29:33.781769    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:29:48 no-preload-343495 kubelet[4288]: E1212 21:29:48.778799    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:29:56 no-preload-343495 kubelet[4288]: E1212 21:29:56.890517    4288 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 21:29:56 no-preload-343495 kubelet[4288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 21:29:56 no-preload-343495 kubelet[4288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 21:29:56 no-preload-343495 kubelet[4288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 21:30:01 no-preload-343495 kubelet[4288]: E1212 21:30:01.775939    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	Dec 12 21:30:12 no-preload-343495 kubelet[4288]: E1212 21:30:12.776342    4288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xc79n" podUID="fda5e773-f1a9-4f99-a0e0-06d67d5f1705"
	
	
	==> storage-provisioner [410771844ae4b28b9c9bda51f625a1dbe6a00f7e9456655181b9474e98ab1ae4] <==
	I1212 21:16:11.941747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:16:11.993789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:16:11.993927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:16:12.014634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:16:12.015719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a32f3864-f015-4e37-be30-850cb267aa84", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-343495_4ec44916-4937-426c-a8cb-8e309ece4040 became leader
	I1212 21:16:12.015985       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-343495_4ec44916-4937-426c-a8cb-8e309ece4040!
	I1212 21:16:12.116319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-343495_4ec44916-4937-426c-a8cb-8e309ece4040!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-343495 -n no-preload-343495
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-343495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xc79n
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-343495 describe pod metrics-server-57f55c9bc5-xc79n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-343495 describe pod metrics-server-57f55c9bc5-xc79n: exit status 1 (71.974668ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xc79n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-343495 describe pod metrics-server-57f55c9bc5-xc79n: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (300.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (229.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 21:26:06.483198   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:26:45.698009   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:26:48.881138   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:27:22.809827   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:28:12.359042   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:28:45.801453   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:28:56.433243   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-372099 -n old-k8s-version-372099
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 21:29:32.966665624 +0000 UTC m=+5574.186838193
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-372099 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-372099 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.305µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-372099 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-372099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-372099 logs -n 25: (1.784636083s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-690675 sudo cat                              | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo                                  | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo find                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-690675 sudo crio                             | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-690675                                       | bridge-690675                | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741087 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | disable-driver-mounts-741087                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:03 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-343495             | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC | 12 Dec 23 21:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831188            | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-372099        | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC | 12 Dec 23 21:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-171828  | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC | 12 Dec 23 21:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:03 UTC |                     |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-343495                  | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-343495                                   | no-preload-343495            | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831188                 | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831188                                  | embed-certs-831188           | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-372099             | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-372099                              | old-k8s-version-372099       | jenkins | v1.32.0 | 12 Dec 23 21:04 UTC | 12 Dec 23 21:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-171828       | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-171828 | jenkins | v1.32.0 | 12 Dec 23 21:06 UTC | 12 Dec 23 21:15 UTC |
	|         | default-k8s-diff-port-171828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 21:06:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:06:02.112042   61298 out.go:296] Setting OutFile to fd 1 ...
	I1212 21:06:02.112158   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112166   61298 out.go:309] Setting ErrFile to fd 2...
	I1212 21:06:02.112171   61298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 21:06:02.112352   61298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 21:06:02.112888   61298 out.go:303] Setting JSON to false
	I1212 21:06:02.113799   61298 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6516,"bootTime":1702408646,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 21:06:02.113858   61298 start.go:138] virtualization: kvm guest
	I1212 21:06:02.116152   61298 out.go:177] * [default-k8s-diff-port-171828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 21:06:02.118325   61298 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 21:06:02.118373   61298 notify.go:220] Checking for updates...
	I1212 21:06:02.120036   61298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:06:02.121697   61298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:06:02.123350   61298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 21:06:02.124958   61298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 21:06:02.126355   61298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:06:02.128221   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:06:02.128652   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.128709   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.143368   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I1212 21:06:02.143740   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.144319   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.144342   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.144674   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.144877   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.145143   61298 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 21:06:02.145473   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:06:02.145519   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:06:02.160165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1212 21:06:02.160611   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:06:02.161098   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:06:02.161129   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:06:02.161410   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:06:02.161605   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:06:02.198703   61298 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 21:06:02.199992   61298 start.go:298] selected driver: kvm2
	I1212 21:06:02.200011   61298 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.200131   61298 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:06:02.200848   61298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.200920   61298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 21:06:02.215947   61298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 21:06:02.216333   61298 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:02.216397   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:06:02.216410   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:06:02.216420   61298 start_flags.go:323] config:
	{Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:06:02.216597   61298 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:02.218773   61298 out.go:177] * Starting control plane node default-k8s-diff-port-171828 in cluster default-k8s-diff-port-171828
	I1212 21:05:59.427580   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:02.220182   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:06:02.220241   61298 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 21:06:02.220256   61298 cache.go:56] Caching tarball of preloaded images
	I1212 21:06:02.220379   61298 preload.go:174] Found /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 21:06:02.220393   61298 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 21:06:02.220514   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:06:02.220739   61298 start.go:365] acquiring machines lock for default-k8s-diff-port-171828: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
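The machines lock acquired above is a named, cross-process lock with a poll delay and an overall timeout ({Delay:500ms Timeout:13m0s}). A toy Go sketch of that acquire-with-delay-and-timeout pattern, not minikube's actual implementation; the TryLock-based mutex here is only an in-process stand-in:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// machineLock is an in-process stand-in for the named machines lock whose
// parameters appear in the log: callers poll for the lock every delay and
// give up after timeout.
type machineLock struct {
	mu sync.Mutex
}

func (l *machineLock) acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if l.mu.TryLock() {
			fmt.Printf("acquired machines lock for %q\n", name)
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	var l machineLock
	if err := l.acquire("default-k8s-diff-port-171828", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer l.mu.Unlock()
	// ... start or fix the machine while holding the lock ...
}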
	I1212 21:06:05.507538   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:08.579605   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:14.659535   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:17.731542   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:23.811575   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:26.883541   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:32.963600   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:36.035521   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:42.115475   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:45.187562   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:51.267528   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:06:54.339532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:00.419548   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:03.491553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:09.571514   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:12.643531   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:18.723534   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:21.795549   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:27.875554   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:30.947574   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:37.027523   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:40.099490   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:46.179518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:49.251577   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:55.331532   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:07:58.403520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:04.483547   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:07.555546   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:13.635553   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:16.707518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:22.787551   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:25.859539   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:31.939511   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:35.011564   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:41.091518   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:44.163443   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:50.243526   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:53.315520   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:08:59.395550   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
	I1212 21:09:02.467533   60628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.176:22: connect: no route to host
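The run of "no route to host" errors above is process 60628 repeatedly probing TCP port 22 on 192.168.61.176 while the no-preload VM is unreachable. A minimal Go sketch of that kind of dial-and-retry SSH probe; the interval and timeout values are assumptions, not minikube's constants:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr until a TCP connection succeeds or the overall
// timeout passes, printing the dial error each round -- the same shape as
// the repeated "Error dialing TCP ... no route to host" lines above.
func waitForSSH(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s: %w", addr, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSSH("192.168.61.176:22", 3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}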
	I1212 21:09:05.471384   60833 start.go:369] acquired machines lock for "embed-certs-831188" in 4m18.011296189s
	I1212 21:09:05.471446   60833 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:05.471453   60833 fix.go:54] fixHost starting: 
	I1212 21:09:05.471803   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:05.471837   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:05.486451   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1212 21:09:05.486900   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:05.487381   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:05.487404   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:05.487715   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:05.487879   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:05.488020   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:05.489670   60833 fix.go:102] recreateIfNeeded on embed-certs-831188: state=Stopped err=<nil>
	I1212 21:09:05.489704   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	W1212 21:09:05.489876   60833 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:05.492059   60833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831188" ...
	I1212 21:09:05.493752   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Start
	I1212 21:09:05.493959   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring networks are active...
	I1212 21:09:05.494984   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network default is active
	I1212 21:09:05.495423   60833 main.go:141] libmachine: (embed-certs-831188) Ensuring network mk-embed-certs-831188 is active
	I1212 21:09:05.495761   60833 main.go:141] libmachine: (embed-certs-831188) Getting domain xml...
	I1212 21:09:05.496421   60833 main.go:141] libmachine: (embed-certs-831188) Creating domain...
	I1212 21:09:06.732388   60833 main.go:141] libmachine: (embed-certs-831188) Waiting to get IP...
	I1212 21:09:06.733338   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:06.733708   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:06.733785   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:06.733676   61768 retry.go:31] will retry after 284.906493ms: waiting for machine to come up
	I1212 21:09:07.020284   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.020718   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.020745   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.020671   61768 retry.go:31] will retry after 293.274895ms: waiting for machine to come up
	I1212 21:09:07.315313   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.315686   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.315712   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.315641   61768 retry.go:31] will retry after 361.328832ms: waiting for machine to come up
	I1212 21:09:05.469256   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:05.469293   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:09:05.471233   60628 machine.go:91] provisioned docker machine in 4m37.408714984s
	I1212 21:09:05.471294   60628 fix.go:56] fixHost completed within 4m37.431179626s
	I1212 21:09:05.471299   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 4m37.431203273s
	W1212 21:09:05.471318   60628 start.go:694] error starting host: provision: host is not running
	W1212 21:09:05.471416   60628 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 21:09:05.471424   60628 start.go:709] Will try again in 5 seconds ...
	I1212 21:09:07.678255   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:07.678636   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:07.678700   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:07.678599   61768 retry.go:31] will retry after 604.479659ms: waiting for machine to come up
	I1212 21:09:08.284350   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:08.284754   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:08.284779   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:08.284701   61768 retry.go:31] will retry after 731.323448ms: waiting for machine to come up
	I1212 21:09:09.017564   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.018007   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.018040   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.017968   61768 retry.go:31] will retry after 734.083609ms: waiting for machine to come up
	I1212 21:09:09.753947   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:09.754423   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:09.754446   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:09.754362   61768 retry.go:31] will retry after 786.816799ms: waiting for machine to come up
	I1212 21:09:10.542771   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:10.543304   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:10.543341   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:10.543264   61768 retry.go:31] will retry after 1.40646031s: waiting for machine to come up
	I1212 21:09:11.951821   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:11.952180   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:11.952223   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:11.952135   61768 retry.go:31] will retry after 1.693488962s: waiting for machine to come up
	I1212 21:09:10.473087   60628 start.go:365] acquiring machines lock for no-preload-343495: {Name:mkcb5108e7c2f79abc707be5209953eb9da754f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 21:09:13.646801   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:13.647256   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:13.647299   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:13.647180   61768 retry.go:31] will retry after 1.856056162s: waiting for machine to come up
	I1212 21:09:15.504815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:15.505228   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:15.505258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:15.505175   61768 retry.go:31] will retry after 2.008264333s: waiting for machine to come up
	I1212 21:09:17.516231   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:17.516653   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:17.516683   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:17.516604   61768 retry.go:31] will retry after 3.239343078s: waiting for machine to come up
	I1212 21:09:20.757258   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:20.757696   60833 main.go:141] libmachine: (embed-certs-831188) DBG | unable to find current IP address of domain embed-certs-831188 in network mk-embed-certs-831188
	I1212 21:09:20.757725   60833 main.go:141] libmachine: (embed-certs-831188) DBG | I1212 21:09:20.757654   61768 retry.go:31] will retry after 4.315081016s: waiting for machine to come up
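The "will retry after ...: waiting for machine to come up" lines come from a retry helper whose sleep grows from a few hundred milliseconds up to several seconds while the restarted domain acquires a DHCP lease. A rough Go sketch of a randomized, growing backoff loop in the same spirit; the starting delay, growth factor, and cap are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses,
// sleeping a randomized, growing interval between attempts.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	start := time.Now()
	delay := 300 * time.Millisecond // assumed starting delay
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return errors.New("machine did not come up in time")
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second { // assumed cap
			delay += delay / 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("done:", err)
}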
	I1212 21:09:26.424166   60948 start.go:369] acquired machines lock for "old-k8s-version-372099" in 4m29.049387398s
	I1212 21:09:26.424241   60948 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:26.424254   60948 fix.go:54] fixHost starting: 
	I1212 21:09:26.424715   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:26.424763   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:26.444634   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I1212 21:09:26.445043   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:26.445520   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:09:26.445538   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:26.445863   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:26.446052   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:26.446192   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:09:26.447776   60948 fix.go:102] recreateIfNeeded on old-k8s-version-372099: state=Stopped err=<nil>
	I1212 21:09:26.447804   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	W1212 21:09:26.448015   60948 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:26.450126   60948 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-372099" ...
	I1212 21:09:26.451553   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Start
	I1212 21:09:26.451708   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring networks are active...
	I1212 21:09:26.452388   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network default is active
	I1212 21:09:26.452655   60948 main.go:141] libmachine: (old-k8s-version-372099) Ensuring network mk-old-k8s-version-372099 is active
	I1212 21:09:26.453124   60948 main.go:141] libmachine: (old-k8s-version-372099) Getting domain xml...
	I1212 21:09:26.453799   60948 main.go:141] libmachine: (old-k8s-version-372099) Creating domain...
	I1212 21:09:25.078112   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078553   60833 main.go:141] libmachine: (embed-certs-831188) Found IP for machine: 192.168.50.163
	I1212 21:09:25.078585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has current primary IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.078596   60833 main.go:141] libmachine: (embed-certs-831188) Reserving static IP address...
	I1212 21:09:25.078997   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.079030   60833 main.go:141] libmachine: (embed-certs-831188) Reserved static IP address: 192.168.50.163
	I1212 21:09:25.079052   60833 main.go:141] libmachine: (embed-certs-831188) DBG | skip adding static IP to network mk-embed-certs-831188 - found existing host DHCP lease matching {name: "embed-certs-831188", mac: "52:54:00:58:50:cf", ip: "192.168.50.163"}
	I1212 21:09:25.079071   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Getting to WaitForSSH function...
	I1212 21:09:25.079085   60833 main.go:141] libmachine: (embed-certs-831188) Waiting for SSH to be available...
	I1212 21:09:25.080901   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081194   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.081242   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.081366   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH client type: external
	I1212 21:09:25.081388   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa (-rw-------)
	I1212 21:09:25.081416   60833 main.go:141] libmachine: (embed-certs-831188) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:25.081426   60833 main.go:141] libmachine: (embed-certs-831188) DBG | About to run SSH command:
	I1212 21:09:25.081438   60833 main.go:141] libmachine: (embed-certs-831188) DBG | exit 0
	I1212 21:09:25.171277   60833 main.go:141] libmachine: (embed-certs-831188) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:25.171663   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetConfigRaw
	I1212 21:09:25.172345   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.174944   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175302   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.175333   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.175553   60833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/config.json ...
	I1212 21:09:25.175828   60833 machine.go:88] provisioning docker machine ...
	I1212 21:09:25.175855   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:25.176065   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176212   60833 buildroot.go:166] provisioning hostname "embed-certs-831188"
	I1212 21:09:25.176233   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.176371   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.178556   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178823   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.178850   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.178957   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.179142   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179295   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.179436   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.179558   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.179895   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.179910   60833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831188 && echo "embed-certs-831188" | sudo tee /etc/hostname
	I1212 21:09:25.312418   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831188
	
	I1212 21:09:25.312457   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.315156   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315529   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.315570   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.315707   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.315895   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316053   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.316211   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.316378   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.316840   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.316869   60833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831188' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831188/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831188' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:25.448302   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
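The hostname and /etc/hosts edits above are plain shell pushed to the guest over SSH with the machine's private key shown earlier. A minimal sketch of executing such a command with golang.org/x/crypto/ssh; the key path is a placeholder and the insecure host-key callback is only suitable for throwaway test VMs:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes a single shell command on the guest, the way the
// provisioner pushes the hostname and /etc/hosts edits above.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Key path is a placeholder; substitute the machine's id_rsa from the profile directory.
	out, err := runOverSSH("192.168.50.163:22", "docker", "/path/to/machines/embed-certs-831188/id_rsa",
		`sudo hostname embed-certs-831188 && echo "embed-certs-831188" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}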
	I1212 21:09:25.448332   60833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:25.448353   60833 buildroot.go:174] setting up certificates
	I1212 21:09:25.448362   60833 provision.go:83] configureAuth start
	I1212 21:09:25.448369   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetMachineName
	I1212 21:09:25.448691   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:25.451262   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.451639   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.451807   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.454144   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454434   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.454460   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.454596   60833 provision.go:138] copyHostCerts
	I1212 21:09:25.454665   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:25.454689   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:25.454775   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:25.454928   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:25.454940   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:25.454984   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:25.455062   60833 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:25.455073   60833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:25.455106   60833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:25.455171   60833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831188 san=[192.168.50.163 192.168.50.163 localhost 127.0.0.1 minikube embed-certs-831188]
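Generating the server cert boils down to issuing an x509 certificate signed by the cluster CA with the SAN list shown above (guest IP, localhost, 127.0.0.1, "minikube", and the machine name). A compact crypto/x509 sketch of that step against a throwaway CA; names and the 26280h expiry are taken from the log for illustration only, this is not minikube's code, and errors are ignored for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newCert builds a certificate template carrying the SANs the provisioner logs above.
func newCert(cn string, ips []net.IP, dns []string, isCA bool) *x509.Certificate {
	c := &x509.Certificate{
		SerialNumber:          big.NewInt(time.Now().UnixNano()),
		Subject:               pkix.Name{CommonName: cn, Organization: []string{"jenkins.embed-certs-831188"}},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:           ips,
		DNSNames:              dns,
		BasicConstraintsValid: true,
	}
	if isCA {
		c.IsCA = true
		c.KeyUsage |= x509.KeyUsageCertSign
	}
	return c
}

func main() {
	// Throwaway CA standing in for the minikubeCA key pair referenced above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := newCert("minikubeCA", nil, nil, true)
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := newCert("embed-certs-831188",
		[]net.IP{net.ParseIP("192.168.50.163"), net.ParseIP("127.0.0.1")},
		[]string{"localhost", "minikube", "embed-certs-831188"}, false)
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}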
	I1212 21:09:25.678855   60833 provision.go:172] copyRemoteCerts
	I1212 21:09:25.678942   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:25.678975   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.681866   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682221   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.682249   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.682399   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.682590   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.682730   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.682856   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:25.773454   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:25.796334   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 21:09:25.818680   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:09:25.840234   60833 provision.go:86] duration metric: configureAuth took 391.845214ms
	I1212 21:09:25.840268   60833 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:25.840497   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:25.840643   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:25.842988   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843431   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:25.843482   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:25.843586   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:25.843772   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.843946   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:25.844066   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:25.844227   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:25.844542   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:25.844563   60833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:26.167363   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:26.167388   60833 machine.go:91] provisioned docker machine in 991.541719ms
	I1212 21:09:26.167398   60833 start.go:300] post-start starting for "embed-certs-831188" (driver="kvm2")
	I1212 21:09:26.167408   60833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:26.167444   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.167739   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:26.167763   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.170188   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170569   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.170611   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.170712   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.170880   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.171049   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.171194   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.261249   60833 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:26.265429   60833 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:26.265451   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:26.265522   60833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:26.265602   60833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:26.265695   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:26.274054   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:26.297890   60833 start.go:303] post-start completed in 130.478946ms
	I1212 21:09:26.297915   60833 fix.go:56] fixHost completed within 20.826462284s
	I1212 21:09:26.297934   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.300585   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.300934   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.300975   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.301144   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.301359   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301529   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.301665   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.301797   60833 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:26.302153   60833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I1212 21:09:26.302164   60833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:26.423978   60833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415366.370228005
	
	I1212 21:09:26.424008   60833 fix.go:206] guest clock: 1702415366.370228005
	I1212 21:09:26.424019   60833 fix.go:219] Guest: 2023-12-12 21:09:26.370228005 +0000 UTC Remote: 2023-12-12 21:09:26.297918475 +0000 UTC m=+278.991313322 (delta=72.30953ms)
	I1212 21:09:26.424052   60833 fix.go:190] guest clock delta is within tolerance: 72.30953ms
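The garbled "date +%!s(MISSING).%!N(MISSING)" earlier is a logging artifact; the command actually sent to the guest is most likely "date +%s.%N", and the returned epoch time is compared against the host clock to decide whether the skew needs correcting. A small Go sketch of that delta-and-tolerance check; the one-second tolerance is an assumption:

package main

import (
	"fmt"
	"time"
)

// clockDelta compares the guest clock (epoch seconds.nanoseconds, as returned
// by `date +%s.%N` over SSH) with the host clock and reports whether the skew
// is inside an assumed tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(1702415366, 370228005) // value echoed by the guest above
	host := guest.Add(72 * time.Millisecond)  // illustrative host reading
	d, ok := clockDelta(guest, host, time.Second)
	fmt.Printf("delta=%s withinTolerance=%v\n", d, ok)
}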
	I1212 21:09:26.424061   60833 start.go:83] releasing machines lock for "embed-certs-831188", held for 20.952636536s
	I1212 21:09:26.424090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.424347   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:26.427068   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427479   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.427519   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.427592   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428173   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428344   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:26.428414   60833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:26.428470   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.428492   60833 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:26.428508   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:26.430943   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431251   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431371   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431393   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431548   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431631   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:26.431654   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:26.431776   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.431844   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:26.431998   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:26.432040   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:26.432285   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.432490   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:26.548980   60833 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:26.555211   60833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:26.707171   60833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:26.714564   60833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:26.714658   60833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:26.730858   60833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:26.730890   60833 start.go:475] detecting cgroup driver to use...
	I1212 21:09:26.730963   60833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:26.751316   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:26.766700   60833 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:26.766767   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:26.783157   60833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:26.799559   60833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:26.908659   60833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:27.029185   60833 docker.go:219] disabling docker service ...
	I1212 21:09:27.029245   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:27.042969   60833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:27.055477   60833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:27.174297   60833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:27.285338   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:27.299676   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:27.317832   60833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:09:27.317900   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.329270   60833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:27.329346   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.341201   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.353243   60833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:27.365796   60833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:27.377700   60833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:27.388796   60833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:27.388858   60833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:27.401983   60833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:27.411527   60833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:27.523326   60833 ssh_runner.go:195] Run: sudo systemctl restart crio
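The CRI-O reconfiguration above is a handful of sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and service restart. A hedged Go sketch of driving the same style of commands, run locally with os/exec for simplicity; minikube pushes them to the guest through its ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO applies the same kind of edits the log shows: point CRI-O at
// the desired pause image and cgroup manager, then restart the service.
func configureCRIO(pauseImage, cgroupManager string) error {
	cmds := [][]string{
		{"sudo", "sed", "-i",
			fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage),
			"/etc/crio/crio.conf.d/02-crio.conf"},
		{"sudo", "sed", "-i",
			fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager),
			"/etc/crio/crio.conf.d/02-crio.conf"},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}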
	I1212 21:09:27.702370   60833 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:27.702435   60833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:27.707537   60833 start.go:543] Will wait 60s for crictl version
	I1212 21:09:27.707619   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:09:27.711502   60833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:27.750808   60833 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:27.750912   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.799419   60833 ssh_runner.go:195] Run: crio --version
	I1212 21:09:27.848900   60833 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 21:09:27.722142   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting to get IP...
	I1212 21:09:27.723300   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.723736   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.723806   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.723702   61894 retry.go:31] will retry after 267.755874ms: waiting for machine to come up
	I1212 21:09:27.993406   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:27.993917   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:27.993947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:27.993865   61894 retry.go:31] will retry after 314.872831ms: waiting for machine to come up
	I1212 21:09:28.310446   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.311022   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.311051   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.310971   61894 retry.go:31] will retry after 435.368111ms: waiting for machine to come up
	I1212 21:09:28.747774   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:28.748267   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:28.748299   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:28.748238   61894 retry.go:31] will retry after 521.305154ms: waiting for machine to come up
	I1212 21:09:29.270989   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.271519   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.271553   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.271446   61894 retry.go:31] will retry after 482.42376ms: waiting for machine to come up
	I1212 21:09:29.755222   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:29.755724   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:29.755755   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:29.755671   61894 retry.go:31] will retry after 676.918794ms: waiting for machine to come up
	I1212 21:09:30.434488   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:30.435072   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:30.435103   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:30.435025   61894 retry.go:31] will retry after 876.618903ms: waiting for machine to come up
	I1212 21:09:31.313270   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:31.313826   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:31.313857   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:31.313775   61894 retry.go:31] will retry after 1.03353638s: waiting for machine to come up
	I1212 21:09:27.850614   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetIP
	I1212 21:09:27.853633   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854033   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:27.854069   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:27.854243   60833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:27.858626   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:27.871999   60833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:09:27.872058   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:27.920758   60833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:09:27.920832   60833 ssh_runner.go:195] Run: which lz4
	I1212 21:09:27.924857   60833 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:27.929186   60833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:27.929220   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:09:29.834194   60833 crio.go:444] Took 1.909381 seconds to copy over tarball
	I1212 21:09:29.834285   60833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:32.348562   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:32.349019   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:32.349041   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:32.348978   61894 retry.go:31] will retry after 1.80085882s: waiting for machine to come up
	I1212 21:09:34.151943   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:34.152375   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:34.152416   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:34.152343   61894 retry.go:31] will retry after 2.08304575s: waiting for machine to come up
	I1212 21:09:36.238682   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:36.239115   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:36.239149   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:36.239074   61894 retry.go:31] will retry after 2.109809124s: waiting for machine to come up
	I1212 21:09:33.005355   60833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.171034001s)
	I1212 21:09:33.005386   60833 crio.go:451] Took 3.171167 seconds to extract the tarball
	I1212 21:09:33.005398   60833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:33.046773   60833 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:33.101606   60833 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:09:33.101627   60833 cache_images.go:84] Images are preloaded, skipping loading
	I1212 21:09:33.101689   60833 ssh_runner.go:195] Run: crio config
	I1212 21:09:33.162553   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:33.162584   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:33.162608   60833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:33.162637   60833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831188 NodeName:embed-certs-831188 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:09:33.162806   60833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831188"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:33.162923   60833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-831188 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:33.162978   60833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:09:33.171937   60833 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:33.172013   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:33.180480   60833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 21:09:33.197675   60833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:33.214560   60833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 21:09:33.234926   60833 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:33.238913   60833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:33.255261   60833 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188 for IP: 192.168.50.163
	I1212 21:09:33.255320   60833 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:33.255462   60833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:33.255496   60833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:33.255561   60833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/client.key
	I1212 21:09:33.255641   60833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key.6a576ed8
	I1212 21:09:33.255686   60833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key
	I1212 21:09:33.255781   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:33.255807   60833 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:33.255814   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:33.255835   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:33.255864   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:33.255885   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:33.255931   60833 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:33.256505   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:33.282336   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:33.307179   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:33.332468   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/embed-certs-831188/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:09:33.357444   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:33.383372   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:33.409070   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:33.438164   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:33.467676   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:33.496645   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:33.523126   60833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:33.548366   60833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:33.567745   60833 ssh_runner.go:195] Run: openssl version
	I1212 21:09:33.573716   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:33.584221   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589689   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.589767   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:33.595880   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:33.609574   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:33.623129   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629541   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.629615   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:33.635862   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:33.646421   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:33.656686   60833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661397   60833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.661473   60833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:33.667092   60833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:33.677905   60833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:33.682795   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:33.689346   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:33.695822   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:33.702368   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:33.708500   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:33.714793   60833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:09:33.721121   60833 kubeadm.go:404] StartCluster: {Name:embed-certs-831188 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-831188 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:33.721252   60833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:33.721319   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:33.759428   60833 cri.go:89] found id: ""
	I1212 21:09:33.759502   60833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:33.769592   60833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:33.769617   60833 kubeadm.go:636] restartCluster start
	I1212 21:09:33.769712   60833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:33.779313   60833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.780838   60833 kubeconfig.go:92] found "embed-certs-831188" server: "https://192.168.50.163:8443"
	I1212 21:09:33.784096   60833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:33.793192   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.793314   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.805112   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:33.805139   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:33.805196   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:33.816975   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.317757   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.317858   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.329702   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:34.817167   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:34.817266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:34.828633   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.317136   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.317230   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.328803   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:35.818032   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:35.818121   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:35.829428   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.318141   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.318253   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.330749   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:36.817284   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:36.817367   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:36.828787   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:37.317183   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.317266   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.334557   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.350131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:38.350522   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:38.350546   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:38.350484   61894 retry.go:31] will retry after 2.423656351s: waiting for machine to come up
	I1212 21:09:40.777036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:40.777455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:40.777489   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:40.777399   61894 retry.go:31] will retry after 3.275180742s: waiting for machine to come up
	I1212 21:09:37.817090   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:37.817219   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:37.833813   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.317328   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.317409   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.334684   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:38.817255   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:38.817353   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:38.831011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.317555   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.317648   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.330189   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:39.817759   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:39.817866   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:39.830611   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.317127   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.317198   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.329508   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:40.817580   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:40.817677   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:40.829289   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.317853   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.317928   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.331394   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:41.818013   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:41.818098   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:41.829011   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:42.317526   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.317610   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.329211   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:44.056058   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:44.056558   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | unable to find current IP address of domain old-k8s-version-372099 in network mk-old-k8s-version-372099
	I1212 21:09:44.056587   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | I1212 21:09:44.056517   61894 retry.go:31] will retry after 4.729711581s: waiting for machine to come up
	I1212 21:09:42.818081   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:42.818166   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:42.829930   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.317420   60833 api_server.go:166] Checking apiserver status ...
	I1212 21:09:43.317526   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:43.328536   60833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:43.794084   60833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:09:43.794118   60833 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:09:43.794129   60833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:09:43.794192   60833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:43.842360   60833 cri.go:89] found id: ""
	I1212 21:09:43.842431   60833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:09:43.859189   60833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:09:43.869065   60833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:09:43.869135   60833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878614   60833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:09:43.878644   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.011533   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.544591   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.757944   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.850440   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:44.942874   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:09:44.942967   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:44.954886   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.466556   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:45.966545   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.465991   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.966021   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:09:46.987348   60833 api_server.go:72] duration metric: took 2.04447632s to wait for apiserver process to appear ...
	I1212 21:09:46.987374   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:09:46.987388   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.987890   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:46.987926   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:46.988389   60833 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I1212 21:09:50.008527   61298 start.go:369] acquired machines lock for "default-k8s-diff-port-171828" in 3m47.787737833s
	I1212 21:09:50.008595   61298 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:09:50.008607   61298 fix.go:54] fixHost starting: 
	I1212 21:09:50.008999   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:50.009035   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:50.025692   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I1212 21:09:50.026047   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:50.026541   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:09:50.026563   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:50.026945   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:50.027160   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:09:50.027344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:09:50.029005   61298 fix.go:102] recreateIfNeeded on default-k8s-diff-port-171828: state=Stopped err=<nil>
	I1212 21:09:50.029031   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	W1212 21:09:50.029193   61298 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:09:50.031805   61298 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-171828" ...
	I1212 21:09:48.789770   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790158   60948 main.go:141] libmachine: (old-k8s-version-372099) Found IP for machine: 192.168.39.202
	I1212 21:09:48.790172   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserving static IP address...
	I1212 21:09:48.790195   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.790655   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.790683   60948 main.go:141] libmachine: (old-k8s-version-372099) Reserved static IP address: 192.168.39.202
	I1212 21:09:48.790701   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | skip adding static IP to network mk-old-k8s-version-372099 - found existing host DHCP lease matching {name: "old-k8s-version-372099", mac: "52:54:00:d3:fa:ae", ip: "192.168.39.202"}
	I1212 21:09:48.790719   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Getting to WaitForSSH function...
	I1212 21:09:48.790736   60948 main.go:141] libmachine: (old-k8s-version-372099) Waiting for SSH to be available...
	I1212 21:09:48.793069   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793392   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.793418   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.793542   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH client type: external
	I1212 21:09:48.793582   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa (-rw-------)
	I1212 21:09:48.793610   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:09:48.793620   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | About to run SSH command:
	I1212 21:09:48.793629   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | exit 0
	I1212 21:09:48.883487   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | SSH cmd err, output: <nil>: 
	I1212 21:09:48.883885   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetConfigRaw
	I1212 21:09:48.884519   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:48.887128   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887455   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.887485   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.887734   60948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/config.json ...
	I1212 21:09:48.887918   60948 machine.go:88] provisioning docker machine ...
	I1212 21:09:48.887936   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:48.888097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888225   60948 buildroot.go:166] provisioning hostname "old-k8s-version-372099"
	I1212 21:09:48.888238   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:48.888378   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:48.890462   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890820   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:48.890847   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:48.890982   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:48.891139   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891289   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:48.891437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:48.891597   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:48.891940   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:48.891955   60948 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-372099 && echo "old-k8s-version-372099" | sudo tee /etc/hostname
	I1212 21:09:49.012923   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-372099
	
	I1212 21:09:49.012954   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.015698   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016076   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.016117   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.016245   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.016437   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016583   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.016710   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.016859   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.017308   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.017338   60948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-372099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-372099/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-372099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:09:49.144804   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:09:49.144842   60948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:09:49.144875   60948 buildroot.go:174] setting up certificates
	I1212 21:09:49.144885   60948 provision.go:83] configureAuth start
	I1212 21:09:49.144896   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetMachineName
	I1212 21:09:49.145181   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:49.147947   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148294   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.148340   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.148475   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.151218   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.151697   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.151760   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.152022   60948 provision.go:138] copyHostCerts
	I1212 21:09:49.152083   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:09:49.152102   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:09:49.152172   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:09:49.152299   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:09:49.152307   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:09:49.152335   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:09:49.152402   60948 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:09:49.152407   60948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:09:49.152428   60948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:09:49.152485   60948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-372099 san=[192.168.39.202 192.168.39.202 localhost 127.0.0.1 minikube old-k8s-version-372099]
	I1212 21:09:49.298406   60948 provision.go:172] copyRemoteCerts
	I1212 21:09:49.298478   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:09:49.298508   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.301384   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301696   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.301729   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.301948   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.302156   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.302320   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.302442   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.385046   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:09:49.409667   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:09:49.434002   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:09:49.458872   60948 provision.go:86] duration metric: configureAuth took 313.97378ms
	I1212 21:09:49.458907   60948 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:09:49.459075   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:09:49.459143   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.461794   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462131   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.462183   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.462373   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.462574   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462730   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.462857   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.463042   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.463594   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.463641   60948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:09:49.767652   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:09:49.767745   60948 machine.go:91] provisioned docker machine in 879.803204ms
	I1212 21:09:49.767772   60948 start.go:300] post-start starting for "old-k8s-version-372099" (driver="kvm2")
	I1212 21:09:49.767785   60948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:09:49.767812   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:49.768162   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:09:49.768191   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.770970   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771351   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.771388   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.771595   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.771805   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.772009   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.772155   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:49.857053   60948 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:09:49.861510   60948 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:09:49.861535   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:09:49.861600   60948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:09:49.861672   60948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:09:49.861781   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:09:49.869967   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:49.892746   60948 start.go:303] post-start completed in 124.959403ms
	I1212 21:09:49.892768   60948 fix.go:56] fixHost completed within 23.468514721s
	I1212 21:09:49.892790   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:49.895273   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895618   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:49.895653   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:49.895776   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:49.895951   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896097   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:49.896269   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:49.896433   60948 main.go:141] libmachine: Using SSH client type: native
	I1212 21:09:49.896887   60948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 21:09:49.896904   60948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:09:50.008384   60948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415389.953345991
	
	I1212 21:09:50.008407   60948 fix.go:206] guest clock: 1702415389.953345991
	I1212 21:09:50.008415   60948 fix.go:219] Guest: 2023-12-12 21:09:49.953345991 +0000 UTC Remote: 2023-12-12 21:09:49.89277138 +0000 UTC m=+292.853960893 (delta=60.574611ms)
	I1212 21:09:50.008441   60948 fix.go:190] guest clock delta is within tolerance: 60.574611ms
	I1212 21:09:50.008445   60948 start.go:83] releasing machines lock for "old-k8s-version-372099", held for 23.584233709s
	I1212 21:09:50.008469   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.008757   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:50.011577   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.011930   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.011958   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.012109   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012750   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.012964   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:09:50.013059   60948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:09:50.013102   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.013195   60948 ssh_runner.go:195] Run: cat /version.json
	I1212 21:09:50.013222   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:09:50.016031   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016304   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016525   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016566   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016720   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.016815   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:50.016855   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:50.016883   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017008   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:09:50.017080   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017186   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:09:50.017256   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.017357   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:09:50.017520   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:09:50.125100   60948 ssh_runner.go:195] Run: systemctl --version
	I1212 21:09:50.132264   60948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:09:50.278965   60948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:09:50.286230   60948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:09:50.286308   60948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:09:50.301165   60948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:09:50.301192   60948 start.go:475] detecting cgroup driver to use...
	I1212 21:09:50.301256   60948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:09:50.318715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:09:50.331943   60948 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:09:50.332013   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:09:50.348872   60948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:09:50.366970   60948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:09:50.492936   60948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:09:50.620103   60948 docker.go:219] disabling docker service ...
	I1212 21:09:50.620185   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:09:50.632962   60948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:09:50.644797   60948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:09:50.759039   60948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:09:50.884352   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:09:50.896549   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:09:50.919987   60948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 21:09:50.920056   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.932147   60948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:09:50.932224   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.941195   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.951010   60948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:09:50.962752   60948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:09:50.975125   60948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:09:50.984906   60948 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:09:50.984971   60948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:09:50.999594   60948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:09:51.010344   60948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:09:51.114607   60948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:09:51.318020   60948 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:09:51.318108   60948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:09:51.325048   60948 start.go:543] Will wait 60s for crictl version
	I1212 21:09:51.325134   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:51.329905   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:09:51.377974   60948 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:09:51.378075   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.444024   60948 ssh_runner.go:195] Run: crio --version
	I1212 21:09:51.512531   60948 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 21:09:51.514171   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetIP
	I1212 21:09:51.517083   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517636   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:09:51.517667   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:09:51.517886   60948 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 21:09:51.522137   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:51.538124   60948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 21:09:51.538191   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:51.594603   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:51.594688   60948 ssh_runner.go:195] Run: which lz4
	I1212 21:09:51.599732   60948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:09:51.604811   60948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:09:51.604844   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
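Note: the run above finds no preloaded kube image for v1.16.0 in CRI-O and no tarball at /preloaded.tar.lz4, so it copies the CRI-O preload tarball over SSH and, as logged at 21:09:53-21:09:57 further down, extracts it into /var. Below is a minimal Go sketch of that check-then-extract flow; it assumes the commands run directly on the guest (the test drives them through ssh_runner over SSH), and the image name, tarball name, and tar invocation are taken from this run.

    // preloadcheck.go - sketch of the preload check/copy flow recorded in the log above.
    // Assumption: run locally on the guest; the real test executes these over SSH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Is the expected control-plane image already present in CRI-O?
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err == nil && strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.16.0") {
            fmt.Println("images already preloaded, nothing to copy")
            return
        }
        // Otherwise check whether the tarball has already been uploaded to the guest.
        if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
            fmt.Println("tarball missing: copy preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 first")
            return
        }
        // Extract into /var, matching the tar command the run uses.
        if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
            fmt.Println("extract failed:", err)
        }
    }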
	I1212 21:09:50.033553   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Start
	I1212 21:09:50.033768   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring networks are active...
	I1212 21:09:50.034638   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network default is active
	I1212 21:09:50.035192   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Ensuring network mk-default-k8s-diff-port-171828 is active
	I1212 21:09:50.035630   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Getting domain xml...
	I1212 21:09:50.036380   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Creating domain...
	I1212 21:09:51.530274   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting to get IP...
	I1212 21:09:51.531329   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531766   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.531841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.531744   62039 retry.go:31] will retry after 271.90604ms: waiting for machine to come up
	I1212 21:09:51.805469   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806028   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:51.806062   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:51.805967   62039 retry.go:31] will retry after 338.221769ms: waiting for machine to come up
	I1212 21:09:47.488610   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.543731   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.543786   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.543807   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.654915   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:09:51.654949   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:09:51.989408   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:51.996278   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:51.996337   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.488734   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.496289   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:09:52.496327   60833 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:09:52.989065   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:09:52.997013   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:09:53.012736   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:09:53.012777   60833 api_server.go:131] duration metric: took 6.025395735s to wait for apiserver health ...
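Note: in the sequence above, the 403 responses appear while the RBAC bootstrap roles are still being created (the 500 bodies show [-]poststarthook/rbac/bootstrap-roles failed), and the wait ends once /healthz returns 200 "ok" after roughly six seconds. Below is a minimal Go sketch of this kind of /healthz poll; it assumes an anonymous HTTPS probe with certificate verification skipped, the apiserver address is the one from this run, and the timeout value is illustrative.

    // healthzpoll.go - poll an apiserver /healthz endpoint until it reports "ok".
    // Sketch only: mirrors the 403 -> 500 -> 200 progression seen in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe is anonymous, so skip verification of the apiserver cert;
            // a production client would trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 (anonymous user) and 500 (post-start hooks pending) both mean "not ready yet".
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", string(body)) // "ok"
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
    }

    func main() {
        // Address taken from this run; illustrative timeout.
        _ = waitForHealthz("https://192.168.50.163:8443/healthz", 2*time.Minute)
    }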
	I1212 21:09:53.012789   60833 cni.go:84] Creating CNI manager for ""
	I1212 21:09:53.012806   60833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:53.014820   60833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:09:53.016797   60833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:09:53.047434   60833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:09:53.095811   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:09:53.115354   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:09:53.115441   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:09:53.115465   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:09:53.115504   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:09:53.115532   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:09:53.115551   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:09:53.115582   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:09:53.115607   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:09:53.115633   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:09:53.115643   60833 system_pods.go:74] duration metric: took 19.808922ms to wait for pod list to return data ...
	I1212 21:09:53.115655   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:09:53.127006   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:09:53.127044   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:09:53.127058   60833 node_conditions.go:105] duration metric: took 11.39604ms to run NodePressure ...
	I1212 21:09:53.127079   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:09:53.597509   60833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603447   60833 kubeadm.go:787] kubelet initialised
	I1212 21:09:53.603476   60833 kubeadm.go:788] duration metric: took 5.932359ms waiting for restarted kubelet to initialise ...
	I1212 21:09:53.603486   60833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:09:53.616570   60833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.623514   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623547   60833 pod_ready.go:81] duration metric: took 6.940441ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.623560   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.623570   60833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.631395   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631426   60833 pod_ready.go:81] duration metric: took 7.844548ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.631438   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "etcd-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.631453   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.649647   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649681   60833 pod_ready.go:81] duration metric: took 18.215042ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.649693   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.649702   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:53.662239   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662271   60833 pod_ready.go:81] duration metric: took 12.552977ms waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:53.662285   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:53.662298   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.005841   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005879   60833 pod_ready.go:81] duration metric: took 343.569867ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.005892   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-proxy-nsv4w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.005908   60833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.403249   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403280   60833 pod_ready.go:81] duration metric: took 397.363687ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.403291   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.403297   60833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:09:54.802330   60833 pod_ready.go:97] node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802367   60833 pod_ready.go:81] duration metric: took 399.057426ms waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:09:54.802380   60833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831188" hosting pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:54.802390   60833 pod_ready.go:38] duration metric: took 1.198894195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
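Note: each pod_ready wait above ends early because the node embed-certs-831188 hosting the pod is itself not yet "Ready", so the per-pod check is skipped. Below is a minimal client-go sketch of waiting on a pod's Ready condition, the check that loop otherwise performs; the kubeconfig path is a placeholder, and the namespace and pod name are taken from the pods listed above.

    // podready.go - wait for a pod's Ready condition, similar to the pod_ready loop above.
    // Sketch only; kubeconfig path is a placeholder.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // Ready=True means all containers are up and passing readiness probes.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-831188", 4*time.Minute))
    }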
	I1212 21:09:54.802413   60833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:09:54.822125   60833 ops.go:34] apiserver oom_adj: -16
	I1212 21:09:54.822154   60833 kubeadm.go:640] restartCluster took 21.052529291s
	I1212 21:09:54.822173   60833 kubeadm.go:406] StartCluster complete in 21.101061651s
	I1212 21:09:54.822194   60833 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.822273   60833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:09:54.825185   60833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:54.825490   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:09:54.825622   60833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:09:54.825714   60833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831188"
	I1212 21:09:54.825735   60833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-831188"
	W1212 21:09:54.825756   60833 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:09:54.825806   60833 addons.go:69] Setting metrics-server=true in profile "embed-certs-831188"
	I1212 21:09:54.825837   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.825849   60833 addons.go:231] Setting addon metrics-server=true in "embed-certs-831188"
	W1212 21:09:54.825863   60833 addons.go:240] addon metrics-server should already be in state true
	I1212 21:09:54.825969   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.826276   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826309   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826522   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.826588   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.826731   60833 config.go:182] Loaded profile config "embed-certs-831188": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:09:54.826767   60833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831188"
	I1212 21:09:54.826847   60833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831188"
	I1212 21:09:54.827349   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.827409   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.834506   60833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-831188" context rescaled to 1 replicas
	I1212 21:09:54.834614   60833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:09:54.837122   60833 out.go:177] * Verifying Kubernetes components...
	I1212 21:09:54.839094   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:09:54.846081   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I1212 21:09:54.846737   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847078   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I1212 21:09:54.847367   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.847387   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.847518   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.847775   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848031   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.848053   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.848061   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.848355   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.848912   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.848955   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.849635   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1212 21:09:54.849986   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.852255   60833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-831188"
	W1212 21:09:54.852279   60833 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:09:54.852306   60833 host.go:66] Checking if "embed-certs-831188" exists ...
	I1212 21:09:54.852727   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.852758   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.853259   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.853289   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.853643   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.854187   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.854223   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.870249   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I1212 21:09:54.870805   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.871406   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.871430   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.871920   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.872090   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.873692   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.876011   60833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:54.874681   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1212 21:09:54.877102   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1212 21:09:54.877666   60833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:54.877691   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:09:54.877710   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.877993   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878108   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.878602   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878622   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.878738   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.878754   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.879004   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.879362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.879426   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.880445   60833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:09:54.880486   60833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:09:54.881642   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.883715   60833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:09:54.885165   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:09:54.885184   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:09:54.885199   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.883021   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.883884   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.885257   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.885295   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.885442   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.885598   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.885727   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.893093   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893096   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.893152   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.893190   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.893362   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.893534   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.893676   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:54.902833   60833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1212 21:09:54.903320   60833 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:09:54.903867   60833 main.go:141] libmachine: Using API Version  1
	I1212 21:09:54.903888   60833 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:09:54.904337   60833 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:09:54.904535   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetState
	I1212 21:09:54.906183   60833 main.go:141] libmachine: (embed-certs-831188) Calling .DriverName
	I1212 21:09:54.906443   60833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:54.906463   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:09:54.906484   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHHostname
	I1212 21:09:54.909330   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.909914   60833 main.go:141] libmachine: (embed-certs-831188) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:50:cf", ip: ""} in network mk-embed-certs-831188: {Iface:virbr2 ExpiryTime:2023-12-12 22:01:16 +0000 UTC Type:0 Mac:52:54:00:58:50:cf Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:embed-certs-831188 Clientid:01:52:54:00:58:50:cf}
	I1212 21:09:54.909954   60833 main.go:141] libmachine: (embed-certs-831188) DBG | domain embed-certs-831188 has defined IP address 192.168.50.163 and MAC address 52:54:00:58:50:cf in network mk-embed-certs-831188
	I1212 21:09:54.910136   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHPort
	I1212 21:09:54.910328   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHKeyPath
	I1212 21:09:54.910492   60833 main.go:141] libmachine: (embed-certs-831188) Calling .GetSSHUsername
	I1212 21:09:54.910639   60833 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/embed-certs-831188/id_rsa Username:docker}
	I1212 21:09:55.020642   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:09:55.123475   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:09:55.141398   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:09:55.141429   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:09:55.200799   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:09:55.200833   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:09:55.275142   60833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:55.275172   60833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:09:55.308985   60833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831188" to be "Ready" ...
	I1212 21:09:55.309133   60833 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:09:55.341251   60833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:09:56.829715   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706199185s)
	I1212 21:09:56.829768   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829780   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.829784   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.809111646s)
	I1212 21:09:56.829860   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.829870   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830143   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.830166   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.830178   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.830188   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.830267   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.831959   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832013   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.832048   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.831765   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831788   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.831794   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.832139   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.832236   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.833156   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.833196   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:56.843517   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:56.843542   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:56.843815   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:56.843870   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:56.843880   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.023745   60833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.682445607s)
	I1212 21:09:57.023801   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.023815   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024252   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024263   60833 main.go:141] libmachine: (embed-certs-831188) DBG | Closing plugin on server side
	I1212 21:09:57.024276   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024287   60833 main.go:141] libmachine: Making call to close driver server
	I1212 21:09:57.024303   60833 main.go:141] libmachine: (embed-certs-831188) Calling .Close
	I1212 21:09:57.024676   60833 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:09:57.024691   60833 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:09:57.024706   60833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-831188"
	I1212 21:09:53.564404   60948 crio.go:444] Took 1.964711 seconds to copy over tarball
	I1212 21:09:53.564488   60948 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:09:57.052627   60948 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.488106402s)
	I1212 21:09:57.052657   60948 crio.go:451] Took 3.488218 seconds to extract the tarball
	I1212 21:09:57.052669   60948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:09:52.145724   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146453   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.146484   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.146352   62039 retry.go:31] will retry after 482.98499ms: waiting for machine to come up
	I1212 21:09:52.630862   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631317   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:52.631343   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:52.631232   62039 retry.go:31] will retry after 480.323704ms: waiting for machine to come up
	I1212 21:09:53.113661   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114344   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.114372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.114249   62039 retry.go:31] will retry after 649.543956ms: waiting for machine to come up
	I1212 21:09:53.765102   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765613   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:53.765643   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:53.765558   62039 retry.go:31] will retry after 824.137815ms: waiting for machine to come up
	I1212 21:09:54.591782   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592356   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:54.592391   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:54.592273   62039 retry.go:31] will retry after 874.563899ms: waiting for machine to come up
	I1212 21:09:55.468934   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469429   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:55.469459   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:55.469393   62039 retry.go:31] will retry after 1.224276076s: waiting for machine to come up
	I1212 21:09:56.695111   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695604   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:56.695637   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:56.695560   62039 retry.go:31] will retry after 1.207984075s: waiting for machine to come up
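	The "will retry after …" entries above come from minikube's generic retry helper while it waits for the freshly started VM to obtain a DHCP lease. As a rough sketch of that pattern (waitForIP and lookupIP are illustrative names under assumption, not minikube's actual retry.go API), the loop looks roughly like this in Go:

	// A minimal sketch, assuming a placeholder lookupIP, of the poll-with-growing-
	// jittered-delay pattern visible in the log entries above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the libvirt DHCP-lease query; it is a placeholder.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Grow the delay and add jitter, roughly like the intervals in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		if _, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		}
	}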
	I1212 21:09:57.157310   60833 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:09:57.322702   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:09:57.093318   60948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:09:57.723104   60948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 21:09:57.723132   60948 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:09:57.723259   60948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.723342   60948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.723442   60948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.723317   60948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 21:09:57.723302   60948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.723297   60948 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724835   60948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 21:09:57.724864   60948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:57.724861   60948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.724836   60948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.724853   60948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.724842   60948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.724847   60948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.724893   60948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.918047   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.920893   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:57.927072   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:57.928080   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:57.931259   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 21:09:57.932017   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:57.939580   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 21:09:57.990594   60948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 21:09:57.990667   60948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:57.990724   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.059759   60948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:09:58.095401   60948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 21:09:58.095451   60948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.095504   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138192   60948 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 21:09:58.138287   60948 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.138333   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.138491   60948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 21:09:58.138532   60948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.138594   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145060   60948 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 21:09:58.145116   60948 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 21:09:58.145146   60948 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 21:09:58.145177   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145185   60948 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.145225   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145073   60948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 21:09:58.145250   60948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.145271   60948 ssh_runner.go:195] Run: which crictl
	I1212 21:09:58.145322   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 21:09:58.268621   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 21:09:58.268721   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 21:09:58.268774   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 21:09:58.268826   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 21:09:58.268863   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 21:09:58.268895   60948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 21:09:58.268956   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 21:09:58.408748   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 21:09:58.418795   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 21:09:58.418843   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 21:09:58.420451   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 21:09:58.420516   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 21:09:58.420577   60948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.420585   60948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 21:09:58.425621   60948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 21:09:58.425639   60948 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 21:09:58.425684   60948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 21:09:59.172682   60948 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 21:09:59.172736   60948 cache_images.go:92] LoadImages completed in 1.449590507s
	W1212 21:09:59.172819   60948 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
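	The preceding entries trace the image-cache fallback: when the preload tarball does not contain the images for this Kubernetes version, each image is inspected in the runtime, any stale copy is removed with crictl, and the cached archive under /var/lib/minikube/images is loaded with podman. A minimal sketch of that flow, assuming local execution instead of minikube's SSH runner (ensureImage is an illustrative helper, not minikube's API):

	// Rough sketch of the cache-load flow: skip if present, otherwise remove the
	// stale image and load the cached archive into the image store.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureImage(ref, archive string) error {
		// Does the runtime already know this image?
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err == nil {
			return nil // already present, nothing to transfer
		}
		// Drop any partial or stale copy, ignoring "not found" errors.
		_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
		// Load the cached tarball into the image store via podman.
		out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", archive, err, out)
		}
		return nil
	}

	func main() {
		if err := ensureImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}

	Loading through podman works here because CRI-O and podman share the same containers/storage image store, so an image loaded by podman is immediately visible to the CRI runtime.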
	I1212 21:09:59.172900   60948 ssh_runner.go:195] Run: crio config
	I1212 21:09:59.238502   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:09:59.238522   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:09:59.238539   60948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:09:59.238560   60948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-372099 NodeName:old-k8s-version-372099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 21:09:59.238733   60948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-372099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-372099
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.202:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:09:59.238886   60948 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-372099 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:09:59.238953   60948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 21:09:59.249183   60948 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:09:59.249271   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:09:59.263171   60948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 21:09:59.281172   60948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:09:59.302622   60948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 21:09:59.323131   60948 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1212 21:09:59.327344   60948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:09:59.342182   60948 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099 for IP: 192.168.39.202
	I1212 21:09:59.342216   60948 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:09:59.342412   60948 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:09:59.342465   60948 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:09:59.342554   60948 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/client.key
	I1212 21:09:59.342659   60948 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key.9e66e972
	I1212 21:09:59.342723   60948 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key
	I1212 21:09:59.342854   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:09:59.342891   60948 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:09:59.342908   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:09:59.342947   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:09:59.342984   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:09:59.343024   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:09:59.343081   60948 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:09:59.343948   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:09:59.375250   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:09:59.404892   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:09:59.434762   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/old-k8s-version-372099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:09:59.465696   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:09:59.496528   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:09:59.521739   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:09:59.545606   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:09:59.574153   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:09:59.599089   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:09:59.625217   60948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:09:59.654715   60948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:09:59.674946   60948 ssh_runner.go:195] Run: openssl version
	I1212 21:09:59.683295   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:09:59.697159   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702671   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.702745   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:09:59.710931   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:09:59.723204   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:09:59.735713   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741621   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.741715   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:09:59.748041   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:09:59.760217   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:09:59.772701   60948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778501   60948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.778589   60948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:09:59.787066   60948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:09:59.803355   60948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:09:59.809920   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:09:59.819093   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:09:59.827918   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:09:59.836228   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:09:59.845437   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:09:59.852647   60948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
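	The openssl invocations above are a pre-flight expiry check: -checkend 86400 exits non-zero if the certificate will expire within the next 24 hours, in which case minikube regenerates it. A small sketch of the same check (the certificate list is illustrative):

	// Minimal sketch: shell out to openssl for each control-plane cert and flag
	// any that will expire within a day.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func expiresWithinADay(path string) bool {
		// -checkend N returns a non-zero status when the cert expires within N seconds.
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			if expiresWithinADay(c) {
				fmt.Println("needs renewal:", c)
			}
		}
	}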
	I1212 21:09:59.861170   60948 kubeadm.go:404] StartCluster: {Name:old-k8s-version-372099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-372099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:09:59.861285   60948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:09:59.861358   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:09:59.906807   60948 cri.go:89] found id: ""
	I1212 21:09:59.906885   60948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:09:59.919539   60948 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:09:59.919579   60948 kubeadm.go:636] restartCluster start
	I1212 21:09:59.919637   60948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:09:59.930547   60948 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.931845   60948 kubeconfig.go:92] found "old-k8s-version-372099" server: "https://192.168.39.202:8443"
	I1212 21:09:59.934471   60948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:09:59.945701   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.945780   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.959415   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:59.959438   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:09:59.959496   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:09:59.975677   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.476388   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.476469   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.493781   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:00.976367   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:00.976475   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:00.993084   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.476277   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.476362   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.490076   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:01.976393   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:01.976505   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:01.990771   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:09:57.905327   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:57.905730   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:57.905649   62039 retry.go:31] will retry after 1.427858275s: waiting for machine to come up
	I1212 21:09:59.335284   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:09:59.335735   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:09:59.335630   62039 retry.go:31] will retry after 1.773169552s: waiting for machine to come up
	I1212 21:10:01.110044   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110533   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:01.110567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:01.110468   62039 retry.go:31] will retry after 2.199207847s: waiting for machine to come up
	I1212 21:09:57.672094   60833 addons.go:502] enable addons completed in 2.846462968s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:09:59.822907   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:01.824673   60833 node_ready.go:58] node "embed-certs-831188" has status "Ready":"False"
	I1212 21:10:02.325980   60833 node_ready.go:49] node "embed-certs-831188" has status "Ready":"True"
	I1212 21:10:02.326008   60833 node_ready.go:38] duration metric: took 7.016985612s waiting for node "embed-certs-831188" to be "Ready" ...
	I1212 21:10:02.326021   60833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:02.339547   60833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345609   60833 pod_ready.go:92] pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.345638   60833 pod_ready.go:81] duration metric: took 6.052243ms waiting for pod "coredns-5dd5756b68-zj5wn" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.345652   60833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.476354   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.476429   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.489326   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:02.975846   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:02.975935   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:02.992975   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.476463   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.476577   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.489471   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.975762   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:03.975891   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:03.992773   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.476395   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.476510   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.489163   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:04.976403   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:04.976503   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:04.990508   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.475988   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.476108   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.489347   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:05.975811   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:05.975874   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:05.988996   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.475817   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.475896   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.487886   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:06.976376   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:06.976445   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:06.988627   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:03.312460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312859   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:03.312892   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:03.312807   62039 retry.go:31] will retry after 4.329332977s: waiting for machine to come up
	I1212 21:10:02.864894   60833 pod_ready.go:92] pod "etcd-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.864921   60833 pod_ready.go:81] duration metric: took 519.26143ms waiting for pod "etcd-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.864935   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871360   60833 pod_ready.go:92] pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:02.871392   60833 pod_ready.go:81] duration metric: took 6.449389ms waiting for pod "kube-apiserver-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:02.871406   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529203   60833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.529228   60833 pod_ready.go:81] duration metric: took 1.657813273s waiting for pod "kube-controller-manager-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.529243   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722607   60833 pod_ready.go:92] pod "kube-proxy-nsv4w" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:04.722631   60833 pod_ready.go:81] duration metric: took 193.381057ms waiting for pod "kube-proxy-nsv4w" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:04.722641   60833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124360   60833 pod_ready.go:92] pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:05.124388   60833 pod_ready.go:81] duration metric: took 401.739767ms waiting for pod "kube-scheduler-embed-certs-831188" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:05.124401   60833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:07.476521   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.476603   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.487362   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:07.976016   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:07.976101   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:07.987221   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.475793   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.475894   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.486641   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:08.976140   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:08.976262   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:08.987507   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.476080   60948 api_server.go:166] Checking apiserver status ...
	I1212 21:10:09.476168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:09.487537   60948 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:09.946342   60948 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
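	The repeated "Checking apiserver status ..." entries above are a poll loop: roughly every half second the runner looks for a kube-apiserver process with pgrep until a context deadline fires, at which point the cluster is marked as needing a reconfigure. A simplified sketch of that loop (findAPIServerPID and waitForAPIServer are illustrative names, not minikube's API):

	// Sketch of polling for the apiserver pid until a context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func findAPIServerPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		return string(out), err
	}

	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if _, err := findAPIServerPID(); err == nil {
				return nil // apiserver process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForAPIServer(ctx); err != nil {
			fmt.Println("needs reconfigure:", err)
		}
	}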
	I1212 21:10:09.946377   60948 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:09.946412   60948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:09.946487   60948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:09.988850   60948 cri.go:89] found id: ""
	I1212 21:10:09.988939   60948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:10.004726   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:10.015722   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:10.015787   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025706   60948 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:10.025743   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:10.156614   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.030056   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.219060   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.315587   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:11.398016   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:11.398110   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.411642   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:11.927297   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:07.644473   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644921   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | unable to find current IP address of domain default-k8s-diff-port-171828 in network mk-default-k8s-diff-port-171828
	I1212 21:10:07.644950   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | I1212 21:10:07.644868   62039 retry.go:31] will retry after 5.180616294s: waiting for machine to come up
	I1212 21:10:07.428366   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:09.929940   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.157275   60628 start.go:369] acquired machines lock for "no-preload-343495" in 1m3.684137096s
	I1212 21:10:14.157330   60628 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:10:14.157342   60628 fix.go:54] fixHost starting: 
	I1212 21:10:14.157767   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:14.157812   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:14.175936   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 21:10:14.176421   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:14.176957   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:10:14.176982   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:14.177380   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:14.177601   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:14.177804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:10:14.179672   60628 fix.go:102] recreateIfNeeded on no-preload-343495: state=Stopped err=<nil>
	I1212 21:10:14.179696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	W1212 21:10:14.179911   60628 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 21:10:14.183064   60628 out.go:177] * Restarting existing kvm2 VM for "no-preload-343495" ...
	I1212 21:10:12.828825   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.829471   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Found IP for machine: 192.168.72.253
	I1212 21:10:12.829501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserving static IP address...
	I1212 21:10:12.829530   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has current primary IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.830061   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.830110   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | skip adding static IP to network mk-default-k8s-diff-port-171828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-171828", mac: "52:54:00:65:ee:fd", ip: "192.168.72.253"}
	I1212 21:10:12.830133   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Reserved static IP address: 192.168.72.253
	I1212 21:10:12.830152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Getting to WaitForSSH function...
	I1212 21:10:12.830163   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Waiting for SSH to be available...
	I1212 21:10:12.832654   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833033   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.833065   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.833273   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH client type: external
	I1212 21:10:12.833302   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa (-rw-------)
	I1212 21:10:12.833335   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:12.833352   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | About to run SSH command:
	I1212 21:10:12.833370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | exit 0
	I1212 21:10:12.931871   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:12.932439   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetConfigRaw
	I1212 21:10:12.933250   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:12.936555   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937009   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.937051   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.937341   61298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/config.json ...
	I1212 21:10:12.937642   61298 machine.go:88] provisioning docker machine ...
	I1212 21:10:12.937669   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:12.937933   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938136   61298 buildroot.go:166] provisioning hostname "default-k8s-diff-port-171828"
	I1212 21:10:12.938161   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:12.938373   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:12.941209   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941589   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:12.941620   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:12.941796   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:12.941978   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942183   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:12.942357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:12.942539   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:12.942885   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:12.942904   61298 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-171828 && echo "default-k8s-diff-port-171828" | sudo tee /etc/hostname
	I1212 21:10:13.099123   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-171828
	
	I1212 21:10:13.099152   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.102085   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102460   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.102496   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.102756   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.102965   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103166   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.103370   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.103580   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.104000   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.104034   61298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-171828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-171828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-171828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:13.246501   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:13.246535   61298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:13.246561   61298 buildroot.go:174] setting up certificates
	I1212 21:10:13.246577   61298 provision.go:83] configureAuth start
	I1212 21:10:13.246590   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetMachineName
	I1212 21:10:13.246875   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:13.249703   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250010   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.250043   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.250196   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.252501   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.252814   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.252852   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.253086   61298 provision.go:138] copyHostCerts
	I1212 21:10:13.253151   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:13.253171   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:13.253266   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:13.253399   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:13.253412   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:13.253437   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:13.253501   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:13.253508   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:13.253526   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:13.253586   61298 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-171828 san=[192.168.72.253 192.168.72.253 localhost 127.0.0.1 minikube default-k8s-diff-port-171828]
	I1212 21:10:13.331755   61298 provision.go:172] copyRemoteCerts
	I1212 21:10:13.331819   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:13.331841   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.334412   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334741   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.334777   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.334981   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.335185   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.335369   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.335498   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.429448   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:10:13.454350   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:13.479200   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 21:10:13.505120   61298 provision.go:86] duration metric: configureAuth took 258.53005ms
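configureAuth above regenerates the machine's server certificate, and the "generating server cert ... san=[192.168.72.253 ... localhost 127.0.0.1 minikube default-k8s-diff-port-171828]" line shows the SAN list baked into it. A generic crypto/x509 sketch of producing such a certificate is below; it is self-signed for brevity, whereas minikube signs the server cert with the CA under .minikube/certs, so treat the details as illustrative only.

```go
// Sketch: generate a key pair and a server certificate whose SANs cover the
// VM IP, localhost and the machine name, mirroring the san=[...] list above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-171828"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log's san=[...] list.
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-171828"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.253"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity; the real flow signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```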
	I1212 21:10:13.505151   61298 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:13.505370   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:13.505451   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.508400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.508826   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.508858   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.509144   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.509360   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509524   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.509677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.509829   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:13.510161   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:13.510184   61298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:13.874783   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:13.874810   61298 machine.go:91] provisioned docker machine in 937.151566ms
	I1212 21:10:13.874822   61298 start.go:300] post-start starting for "default-k8s-diff-port-171828" (driver="kvm2")
	I1212 21:10:13.874835   61298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:13.874853   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:13.875182   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:13.875213   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:13.877937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878357   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:13.878400   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:13.878640   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:13.878819   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:13.878984   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:13.879148   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:13.978276   61298 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:13.984077   61298 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:13.984114   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:13.984229   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:13.984309   61298 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:13.984391   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:13.996801   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:14.021773   61298 start.go:303] post-start completed in 146.935628ms
	I1212 21:10:14.021796   61298 fix.go:56] fixHost completed within 24.013191129s
	I1212 21:10:14.021815   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.024847   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.025227   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.025372   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.025599   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025788   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.025951   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.026106   61298 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:14.026436   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I1212 21:10:14.026452   61298 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:14.157053   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415414.138141396
	
	I1212 21:10:14.157082   61298 fix.go:206] guest clock: 1702415414.138141396
	I1212 21:10:14.157092   61298 fix.go:219] Guest: 2023-12-12 21:10:14.138141396 +0000 UTC Remote: 2023-12-12 21:10:14.021800288 +0000 UTC m=+251.962428882 (delta=116.341108ms)
	I1212 21:10:14.157130   61298 fix.go:190] guest clock delta is within tolerance: 116.341108ms
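The "guest clock" lines above come from running a date command on the guest (the %!s(MISSING).%!N(MISSING) noise is Go's fmt placeholder output; the intended command appears to be `date +%s.%N`), parsing the timestamp, and comparing it against the host clock. A small sketch of that comparison follows; the tolerance value is an assumption for illustration, not minikube's actual threshold.

```go
// Sketch: parse a guest `date +%s.%N` reading and compare it to the host
// clock, reporting the delta and whether it falls within a tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log.
	guestOut := "1702415414.138141396"

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta: %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
```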
	I1212 21:10:14.157141   61298 start.go:83] releasing machines lock for "default-k8s-diff-port-171828", held for 24.148576854s
	I1212 21:10:14.157193   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.157567   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:14.160748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161134   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.161172   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.161489   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162089   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162259   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:14.162333   61298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:14.162389   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.162627   61298 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:14.162652   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:14.165726   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.165941   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166485   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166548   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166598   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:14.166636   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:14.166649   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166905   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:14.166907   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167104   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167153   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:14.167231   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.167349   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:14.167500   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:14.294350   61298 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:14.301705   61298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:14.459967   61298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:14.467979   61298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:14.468043   61298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:14.483883   61298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:14.483910   61298 start.go:475] detecting cgroup driver to use...
	I1212 21:10:14.483976   61298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:14.498105   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:14.511716   61298 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:14.511784   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:14.525795   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:14.539213   61298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:14.658453   61298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:14.786222   61298 docker.go:219] disabling docker service ...
	I1212 21:10:14.786296   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:14.801656   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:14.814821   61298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:14.950542   61298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:15.085306   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:15.098508   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:15.118634   61298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:15.118709   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.130579   61298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:15.130667   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.140672   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.150340   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:15.161966   61298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:15.173049   61298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:15.181620   61298 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:15.181703   61298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:15.195505   61298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:10:15.204076   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:15.327587   61298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:15.505003   61298 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:15.505078   61298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:15.512282   61298 start.go:543] Will wait 60s for crictl version
	I1212 21:10:15.512349   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:10:15.516564   61298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:15.556821   61298 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:15.556906   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.612743   61298 ssh_runner.go:195] Run: crio --version
	I1212 21:10:15.665980   61298 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
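The sed one-liners run just above configure CRI-O in place via /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs, conmon_cgroup is reset to "pod", and crio is restarted. The same substitutions are expressed in Go below purely as an illustration; the starting config is a made-up example, and minikube performs these edits over SSH with sed, not like this.

```go
// Sketch: apply the same line rewrites as the sed commands in the log to an
// in-memory copy of a crio drop-in config.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as in: sed 's|^.*pause_image = .*$|...|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then add conmon_cgroup = "pod"
	// right after the cgroup_manager line, mirroring the /d and /a sed steps.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
```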
	I1212 21:10:12.426883   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.927168   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:12.962834   60948 api_server.go:72] duration metric: took 1.56481721s to wait for apiserver process to appear ...
	I1212 21:10:12.962862   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:12.962890   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.963447   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:12.963489   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:12.964022   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": dial tcp 192.168.39.202:8443: connect: connection refused
	I1212 21:10:13.464393   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:15.667323   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetIP
	I1212 21:10:15.670368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.670769   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:15.670804   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:15.671037   61298 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:15.675575   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:15.688523   61298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 21:10:15.688602   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:15.739601   61298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 21:10:15.739718   61298 ssh_runner.go:195] Run: which lz4
	I1212 21:10:15.744272   61298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 21:10:15.749574   61298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:10:15.749612   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 21:10:12.428614   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.430542   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:16.442797   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:14.184429   60628 main.go:141] libmachine: (no-preload-343495) Calling .Start
	I1212 21:10:14.184692   60628 main.go:141] libmachine: (no-preload-343495) Ensuring networks are active...
	I1212 21:10:14.186580   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network default is active
	I1212 21:10:14.187398   60628 main.go:141] libmachine: (no-preload-343495) Ensuring network mk-no-preload-343495 is active
	I1212 21:10:14.188587   60628 main.go:141] libmachine: (no-preload-343495) Getting domain xml...
	I1212 21:10:14.189457   60628 main.go:141] libmachine: (no-preload-343495) Creating domain...
	I1212 21:10:15.509306   60628 main.go:141] libmachine: (no-preload-343495) Waiting to get IP...
	I1212 21:10:15.510320   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.510728   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.510772   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.510702   62255 retry.go:31] will retry after 275.567053ms: waiting for machine to come up
	I1212 21:10:15.788793   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:15.789233   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:15.789262   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:15.789193   62255 retry.go:31] will retry after 341.343409ms: waiting for machine to come up
	I1212 21:10:16.131936   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.132427   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.132452   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.132377   62255 retry.go:31] will retry after 302.905542ms: waiting for machine to come up
	I1212 21:10:16.437184   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.437944   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.437968   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.437850   62255 retry.go:31] will retry after 407.178114ms: waiting for machine to come up
	I1212 21:10:16.846738   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:16.847393   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:16.847429   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:16.847349   62255 retry.go:31] will retry after 507.703222ms: waiting for machine to come up
	I1212 21:10:17.357373   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:17.357975   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:17.358005   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:17.357907   62255 retry.go:31] will retry after 920.403188ms: waiting for machine to come up
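The repeated "retry.go:31] will retry after ..." lines above are the kvm2 driver polling for the restarted VM's DHCP lease, sleeping a short, roughly growing, jittered interval between attempts. A minimal sketch of that retry shape follows; lookupIP is a hypothetical stand-in for the driver's real lease lookup against libvirt.

```go
// Sketch: retry an IP lookup with jittered, growing delays until it succeeds,
// logging each wait the way retry.go does in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder: the real driver inspects the
// domain's DHCP leases by MAC address. Here it succeeds on the 5th attempt.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.61.10", nil
}

func main() {
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Jittered, increasing delays, like the 275ms..1.48s values in the log.
		delay := time.Duration(200+rand.Intn(300)*attempt) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}
```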
	I1212 21:10:18.464726   60948 api_server.go:269] stopped: https://192.168.39.202:8443/healthz: Get "https://192.168.39.202:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 21:10:18.464781   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.736922   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.736969   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.736990   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:19.816132   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:19.816165   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:19.964508   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.012996   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.013048   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.464538   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:20.509558   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 21:10:20.509601   60948 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 21:10:20.965183   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:10:21.369579   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:10:21.381334   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:10:21.381365   60948 api_server.go:131] duration metric: took 8.418495294s to wait for apiserver health ...
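The api_server.go lines above show the health wait: the client repeatedly GETs https://<node>:8443/healthz, treating connection refused, 403 (RBAC bootstrap roles not yet created) and 500 (poststarthooks still pending) as "keep waiting", and stops once the endpoint returns 200 "ok". A bare-bones sketch of that polling loop is below; it is not minikube's code, and the URL, interval and timeout are assumptions.

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a
// deadline passes, retrying through connection errors, 403s and 500s.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical endpoint, matching the address format in the log.
	url := "https://192.168.39.202:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver presents a cert this bare probe cannot
		// verify, so verification is skipped for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver container restarts.
			fmt.Println("stopped:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // "ok" - control plane is healthy
		}
		// 403 and 500 responses are retried, exactly as in the log.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
```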
	I1212 21:10:21.381378   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.381385   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.501371   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:21.801933   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:21.827010   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:21.853900   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:17.641827   61298 crio.go:444] Took 1.897583 seconds to copy over tarball
	I1212 21:10:17.641919   61298 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:10:21.283045   61298 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.641094924s)
	I1212 21:10:21.283076   61298 crio.go:451] Took 3.641222 seconds to extract the tarball
	I1212 21:10:21.283088   61298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:10:21.328123   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:21.387894   61298 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 21:10:21.387923   61298 cache_images.go:84] Images are preloaded, skipping loading
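The preload handling above is a two-step check: run `sudo crictl images --output json` and look for the expected control-plane image; if it is absent (as in the first pass at 21:10:15.739), scp the preloaded-images-*.tar.lz4 tarball to /preloaded.tar.lz4 and extract it with `tar -I lz4 -C /var -xf`, after which the second crictl pass reports everything preloaded. A small sketch of the detection half is below; the JSON fields follow crictl's output shape, and driving crictl from Go like this is illustrative only.

```go
// Sketch: ask crictl for its image list and decide whether the preload
// tarball still needs to be copied over and extracted.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	want := "registry.k8s.io/kube-apiserver:v1.28.4"

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed, assuming images are not preloaded:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("could not parse crictl output:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("all images are preloaded for cri-o runtime.")
				return
			}
		}
	}
	fmt.Printf("couldn't find preloaded image for %q. assuming images are not preloaded.\n", want)
	// At this point the log shows the tarball being scp'd to /preloaded.tar.lz4
	// and extracted with `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`.
}
```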
	I1212 21:10:21.387996   61298 ssh_runner.go:195] Run: crio config
	I1212 21:10:21.467191   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:21.467216   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:21.467255   61298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:21.467278   61298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-171828 NodeName:default-k8s-diff-port-171828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:21.467443   61298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-171828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:21.467537   61298 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-171828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 21:10:21.467596   61298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 21:10:21.478940   61298 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:21.479024   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:21.492604   61298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 21:10:21.514260   61298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:10:21.535059   61298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
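The kubeadm.go lines above render the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents from the option struct dumped at 21:10:21.467278 and write them to /var/tmp/minikube/kubeadm.yaml.new; note the non-default API server port 8444 that gives the default-k8s-diff-port profile its name. A cut-down sketch of rendering one such document with text/template is below; the template and field names are illustrative, not minikube's actual generator.

```go
// Sketch: render a ClusterConfiguration fragment from a small parameter
// struct, in the spirit of the config dump above.
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		AdvertiseAddress  string
		APIServerPort     int
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
	}{
		AdvertiseAddress:  "192.168.72.253",
		APIServerPort:     8444, // the "diff port" this profile is named for
		KubernetesVersion: "v1.28.4",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```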
	I1212 21:10:21.557074   61298 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:21.562765   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:21.578989   61298 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828 for IP: 192.168.72.253
	I1212 21:10:21.579047   61298 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:21.579282   61298 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:21.579383   61298 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:21.579495   61298 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/client.key
	I1212 21:10:21.768212   61298 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key.a1600f99
	I1212 21:10:21.768305   61298 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key
	I1212 21:10:21.768447   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:21.768489   61298 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:21.768504   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:21.768542   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:21.768596   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:21.768625   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:21.768680   61298 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:21.769557   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:21.800794   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:10:21.833001   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:21.864028   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/default-k8s-diff-port-171828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:21.893107   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:21.918580   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:21.944095   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:21.970251   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:21.998947   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:22.027620   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:22.056851   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:22.084321   61298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:22.103273   61298 ssh_runner.go:195] Run: openssl version
	I1212 21:10:22.109518   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:18.932477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:21.431431   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:18.280164   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:18.280656   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:18.280687   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:18.280612   62255 retry.go:31] will retry after 761.825655ms: waiting for machine to come up
	I1212 21:10:19.043686   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:19.044170   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:19.044203   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:19.044117   62255 retry.go:31] will retry after 1.173408436s: waiting for machine to come up
	I1212 21:10:20.218938   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:20.219457   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:20.219488   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:20.219412   62255 retry.go:31] will retry after 1.484817124s: waiting for machine to come up
	I1212 21:10:21.706027   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:21.706505   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:21.706536   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:21.706467   62255 retry.go:31] will retry after 2.260831172s: waiting for machine to come up
	I1212 21:10:22.159195   60948 system_pods.go:59] 7 kube-system pods found
	I1212 21:10:22.284903   60948 system_pods.go:61] "coredns-5644d7b6d9-slvnx" [0db32241-69df-48dc-a60f-6921f9c5746f] Running
	I1212 21:10:22.284916   60948 system_pods.go:61] "etcd-old-k8s-version-372099" [72d219cb-b393-423d-ba62-b880bd2d26a0] Running
	I1212 21:10:22.284924   60948 system_pods.go:61] "kube-apiserver-old-k8s-version-372099" [c4f09d2d-07d2-4403-886b-37cb1471e7e5] Running
	I1212 21:10:22.284932   60948 system_pods.go:61] "kube-controller-manager-old-k8s-version-372099" [4a17c60c-2c72-4296-a7e4-0ae05e7bfa39] Running
	I1212 21:10:22.284939   60948 system_pods.go:61] "kube-proxy-5mvzb" [ec7c6540-35e2-4ae4-8592-d797132a8328] Running
	I1212 21:10:22.284945   60948 system_pods.go:61] "kube-scheduler-old-k8s-version-372099" [472284a4-9340-4bbc-8a1f-b9b55f4b0c3c] Running
	I1212 21:10:22.284952   60948 system_pods.go:61] "storage-provisioner" [b9fcec5f-bd1f-4c47-95cd-a9c8e3011e50] Running
	I1212 21:10:22.284961   60948 system_pods.go:74] duration metric: took 431.035724ms to wait for pod list to return data ...
	I1212 21:10:22.284990   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:22.592700   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:22.592734   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:22.592748   60948 node_conditions.go:105] duration metric: took 307.751463ms to run NodePressure ...
	I1212 21:10:22.592770   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:23.483331   60948 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:23.500661   60948 retry.go:31] will retry after 162.846257ms: kubelet not initialised
	I1212 21:10:23.669569   60948 retry.go:31] will retry after 257.344573ms: kubelet not initialised
	I1212 21:10:23.942373   60948 retry.go:31] will retry after 538.191385ms: kubelet not initialised
	I1212 21:10:24.487436   60948 retry.go:31] will retry after 635.824669ms: kubelet not initialised
	I1212 21:10:25.129226   60948 retry.go:31] will retry after 946.117517ms: kubelet not initialised
	I1212 21:10:26.082106   60948 retry.go:31] will retry after 2.374588936s: kubelet not initialised
	I1212 21:10:22.121093   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291519   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.291585   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:22.297989   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:22.309847   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:22.321817   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326715   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.326766   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:22.333001   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:22.345044   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:22.357827   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362795   61298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.362858   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:22.368864   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
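
The `openssl x509 -hash` calls and the `<hash>.0` symlinks in the lines above are two halves of one mechanism: OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-name hash, so each PEM that gets installed under /usr/share/ca-certificates also needs a hash-named link. A condensed sketch of that pattern, reusing the minikubeCA path from the log (the log shows its hash as b5213941):

    # Compute the subject hash, then create the "<hash>.0" link OpenSSL expects.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
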
	I1212 21:10:22.380605   61298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:22.385986   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:22.392931   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:22.399683   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:22.407203   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:22.414730   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:22.421808   61298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
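
The six `-checkend 86400` runs above are a freshness gate: `openssl x509 -checkend N` exits 0 only if the certificate is still valid N seconds from now, so a non-zero exit on any of these files would signal that the control-plane cert is about to expire and should not be reused. The same check run by hand against one of the files from the log:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd server cert is good for at least another 24h"
    else
        echo "etcd server cert expires within 24h"
    fi
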
	I1212 21:10:22.430050   61298 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-171828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-171828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:22.430205   61298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:22.430263   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:22.482907   61298 cri.go:89] found id: ""
	I1212 21:10:22.482981   61298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:22.495001   61298 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:22.495032   61298 kubeadm.go:636] restartCluster start
	I1212 21:10:22.495104   61298 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:22.506418   61298 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.508078   61298 kubeconfig.go:92] found "default-k8s-diff-port-171828" server: "https://192.168.72.253:8444"
	I1212 21:10:22.511809   61298 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:22.523641   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.523703   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.536887   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:22.536913   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:22.536965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:22.549418   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.050111   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.050218   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.063845   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.550201   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:23.550303   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:23.567468   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.050021   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.050193   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.064792   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:24.550119   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:24.550213   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:24.568169   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.049891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.049997   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.063341   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:25.549592   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:25.549682   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:25.564096   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.049596   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.049701   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.063482   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:26.549680   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:26.549793   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:26.563956   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:27.049482   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.049614   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.062881   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:23.440487   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:25.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:23.969715   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:23.970242   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:23.970272   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:23.970200   62255 retry.go:31] will retry after 1.769886418s: waiting for machine to come up
	I1212 21:10:25.741628   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:25.742060   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:25.742098   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:25.742014   62255 retry.go:31] will retry after 2.283589137s: waiting for machine to come up
	I1212 21:10:28.462838   60948 retry.go:31] will retry after 1.809333362s: kubelet not initialised
	I1212 21:10:30.278747   60948 retry.go:31] will retry after 4.059791455s: kubelet not initialised
	I1212 21:10:27.550084   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:27.550176   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:27.564365   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.049688   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.049771   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.065367   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:28.549922   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:28.550009   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:28.566964   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.049535   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.049643   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.062264   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:29.549891   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:29.549970   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:29.563687   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.050397   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.050492   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.065602   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:30.550210   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:30.550298   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:30.562793   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.050281   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.050374   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.064836   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:31.550407   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:31.550527   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:31.563474   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:32.049593   61298 api_server.go:166] Checking apiserver status ...
	I1212 21:10:32.049689   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:32.062459   61298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
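
The long run of repeated "Checking apiserver status" failures above is a simple poll: `pgrep -xnf` matches the pattern against the full command line of the newest matching process, so an exit status of 1 just means no kube-apiserver process exists on the node yet. The same check by hand, with the pattern copied from the log:

    # Exit 0 with a PID once kube-apiserver is up; exit 1 (no output) while it is not.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process yet"
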
	I1212 21:10:27.935166   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:30.429274   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:28.028345   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:28.028796   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:28.028824   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:28.028757   62255 retry.go:31] will retry after 4.021160394s: waiting for machine to come up
	I1212 21:10:32.052992   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:32.053479   60628 main.go:141] libmachine: (no-preload-343495) DBG | unable to find current IP address of domain no-preload-343495 in network mk-no-preload-343495
	I1212 21:10:32.053506   60628 main.go:141] libmachine: (no-preload-343495) DBG | I1212 21:10:32.053442   62255 retry.go:31] will retry after 4.864494505s: waiting for machine to come up
	I1212 21:10:34.344571   60948 retry.go:31] will retry after 9.338953291s: kubelet not initialised
	I1212 21:10:32.524460   61298 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:10:32.524492   61298 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:10:32.524523   61298 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:10:32.524586   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:32.565596   61298 cri.go:89] found id: ""
	I1212 21:10:32.565685   61298 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:10:32.582458   61298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:10:32.592539   61298 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:10:32.592615   61298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603658   61298 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:10:32.603683   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:32.730418   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.535390   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.742601   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:33.839081   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
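
Because the earlier `ls -la /etc/kubernetes/...` check found no admin.conf/kubelet.conf, the restart path rebuilds the control plane piecewise with `kubeadm init phase` instead of a full `kubeadm init`. Condensed, the sequence just logged amounts to something like the sketch below (binary path and config path taken from the log):

    KUBEADM_BIN_PATH="/var/lib/minikube/binaries/v1.28.4"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # ${phase} is intentionally unquoted so "certs all" expands to two arguments.
        sudo env PATH="${KUBEADM_BIN_PATH}:$PATH" \
            kubeadm init phase ${phase} --config /var/tmp/minikube/kubeadm.yaml
    done
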
	I1212 21:10:33.909128   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:10:33.909209   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:33.928197   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.452146   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:34.952473   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.452270   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:35.952431   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.451626   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:10:36.482100   61298 api_server.go:72] duration metric: took 2.572973799s to wait for apiserver process to appear ...
	I1212 21:10:36.482125   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:10:36.482154   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.482833   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.482869   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:36.483345   61298 api_server.go:269] stopped: https://192.168.72.253:8444/healthz: Get "https://192.168.72.253:8444/healthz": dial tcp 192.168.72.253:8444: connect: connection refused
	I1212 21:10:36.984105   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:32.433032   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:34.928686   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.930503   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:36.920697   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921201   60628 main.go:141] libmachine: (no-preload-343495) Found IP for machine: 192.168.61.176
	I1212 21:10:36.921235   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has current primary IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.921248   60628 main.go:141] libmachine: (no-preload-343495) Reserving static IP address...
	I1212 21:10:36.921719   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.921757   60628 main.go:141] libmachine: (no-preload-343495) DBG | skip adding static IP to network mk-no-preload-343495 - found existing host DHCP lease matching {name: "no-preload-343495", mac: "52:54:00:60:91:03", ip: "192.168.61.176"}
	I1212 21:10:36.921770   60628 main.go:141] libmachine: (no-preload-343495) Reserved static IP address: 192.168.61.176
	I1212 21:10:36.921785   60628 main.go:141] libmachine: (no-preload-343495) Waiting for SSH to be available...
	I1212 21:10:36.921802   60628 main.go:141] libmachine: (no-preload-343495) DBG | Getting to WaitForSSH function...
	I1212 21:10:36.924581   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.924908   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:36.924941   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:36.925154   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH client type: external
	I1212 21:10:36.925191   60628 main.go:141] libmachine: (no-preload-343495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa (-rw-------)
	I1212 21:10:36.925223   60628 main.go:141] libmachine: (no-preload-343495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 21:10:36.925234   60628 main.go:141] libmachine: (no-preload-343495) DBG | About to run SSH command:
	I1212 21:10:36.925246   60628 main.go:141] libmachine: (no-preload-343495) DBG | exit 0
	I1212 21:10:37.059619   60628 main.go:141] libmachine: (no-preload-343495) DBG | SSH cmd err, output: <nil>: 
	I1212 21:10:37.060017   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetConfigRaw
	I1212 21:10:37.060752   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.063599   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064325   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.064365   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.064468   60628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/config.json ...
	I1212 21:10:37.064705   60628 machine.go:88] provisioning docker machine ...
	I1212 21:10:37.064733   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:37.064938   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065115   60628 buildroot.go:166] provisioning hostname "no-preload-343495"
	I1212 21:10:37.065144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.065286   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.068118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068517   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.068548   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.068804   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.068980   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069141   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.069312   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.069507   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.069958   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.069985   60628 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-343495 && echo "no-preload-343495" | sudo tee /etc/hostname
	I1212 21:10:37.212905   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-343495
	
	I1212 21:10:37.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.215789   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216147   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.216182   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.216336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.216525   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216704   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.216877   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.217037   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.217425   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.217444   60628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-343495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-343495/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-343495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:10:37.355687   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:10:37.355721   60628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17734-9188/.minikube CaCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17734-9188/.minikube}
	I1212 21:10:37.355754   60628 buildroot.go:174] setting up certificates
	I1212 21:10:37.355767   60628 provision.go:83] configureAuth start
	I1212 21:10:37.355780   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetMachineName
	I1212 21:10:37.356089   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:37.359197   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359644   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.359717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.359937   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.362695   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363043   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.363079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.363251   60628 provision.go:138] copyHostCerts
	I1212 21:10:37.363316   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem, removing ...
	I1212 21:10:37.363336   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem
	I1212 21:10:37.363410   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/key.pem (1675 bytes)
	I1212 21:10:37.363536   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem, removing ...
	I1212 21:10:37.363549   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem
	I1212 21:10:37.363585   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/ca.pem (1082 bytes)
	I1212 21:10:37.363671   60628 exec_runner.go:144] found /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem, removing ...
	I1212 21:10:37.363677   60628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem
	I1212 21:10:37.363703   60628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17734-9188/.minikube/cert.pem (1123 bytes)
	I1212 21:10:37.363757   60628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem org=jenkins.no-preload-343495 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-343495]
	I1212 21:10:37.526121   60628 provision.go:172] copyRemoteCerts
	I1212 21:10:37.526205   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:10:37.526234   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.529079   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529425   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.529492   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.529659   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.529850   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.530009   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.530153   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:37.632384   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:10:37.661242   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 21:10:37.689215   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:10:37.714781   60628 provision.go:86] duration metric: configureAuth took 358.999712ms
	I1212 21:10:37.714819   60628 buildroot.go:189] setting minikube options for container-runtime
	I1212 21:10:37.715040   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:10:37.715144   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:37.718379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.718815   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:37.718844   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:37.719212   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:37.719422   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719625   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:37.719789   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:37.719975   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:37.720484   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:37.720519   60628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:10:38.062630   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:10:38.062660   60628 machine.go:91] provisioned docker machine in 997.934774ms
	I1212 21:10:38.062673   60628 start.go:300] post-start starting for "no-preload-343495" (driver="kvm2")
	I1212 21:10:38.062687   60628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:10:38.062707   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.062999   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:10:38.063033   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.065898   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066299   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.066331   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.066626   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.066878   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.067063   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.067228   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.164612   60628 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:10:38.170132   60628 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 21:10:38.170162   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/addons for local assets ...
	I1212 21:10:38.170244   60628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17734-9188/.minikube/files for local assets ...
	I1212 21:10:38.170351   60628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem -> 164562.pem in /etc/ssl/certs
	I1212 21:10:38.170467   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:10:38.181959   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:38.208734   60628 start.go:303] post-start completed in 146.045424ms
	I1212 21:10:38.208762   60628 fix.go:56] fixHost completed within 24.051421131s
	I1212 21:10:38.208782   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.212118   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212519   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.212551   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.212732   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.212947   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213124   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.213268   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.213436   60628 main.go:141] libmachine: Using SSH client type: native
	I1212 21:10:38.213801   60628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1212 21:10:38.213827   60628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 21:10:38.337185   60628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702415438.279018484
	
	I1212 21:10:38.337225   60628 fix.go:206] guest clock: 1702415438.279018484
	I1212 21:10:38.337239   60628 fix.go:219] Guest: 2023-12-12 21:10:38.279018484 +0000 UTC Remote: 2023-12-12 21:10:38.208766005 +0000 UTC m=+370.324656490 (delta=70.252479ms)
	I1212 21:10:38.337264   60628 fix.go:190] guest clock delta is within tolerance: 70.252479ms
	I1212 21:10:38.337275   60628 start.go:83] releasing machines lock for "no-preload-343495", held for 24.179969571s
	I1212 21:10:38.337305   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.337527   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:38.340658   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341019   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.341053   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.341233   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.341952   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342179   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:10:38.342291   60628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:10:38.342336   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.342388   60628 ssh_runner.go:195] Run: cat /version.json
	I1212 21:10:38.342413   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:10:38.345379   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345419   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345762   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345809   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.345841   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:38.345864   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:38.346049   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346055   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346245   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:10:38.346433   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346438   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:10:38.346597   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.346596   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:10:38.467200   60628 ssh_runner.go:195] Run: systemctl --version
	I1212 21:10:38.475578   60628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:10:38.627838   60628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:10:38.634520   60628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:10:38.634614   60628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:10:38.654823   60628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 21:10:38.654847   60628 start.go:475] detecting cgroup driver to use...
	I1212 21:10:38.654928   60628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:10:38.673550   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:10:38.691252   60628 docker.go:203] disabling cri-docker service (if available) ...
	I1212 21:10:38.691318   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:10:38.707542   60628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:10:38.724686   60628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:10:38.843033   60628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:10:38.973535   60628 docker.go:219] disabling docker service ...
	I1212 21:10:38.973610   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:10:38.987940   60628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:10:39.001346   60628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:10:39.105401   60628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:10:39.209198   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:10:39.222268   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:10:39.243154   60628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 21:10:39.243226   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.253418   60628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:10:39.253497   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.263273   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.274546   60628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:10:39.284359   60628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:10:39.294828   60628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:10:39.304818   60628 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 21:10:39.304894   60628 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 21:10:39.318541   60628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
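
As a rough illustration of the netfilter step logged just above (sysctl probe fails with status 255, then br_netfilter is loaded and IPv4 forwarding enabled), here is a minimal Go sketch of that fallback. It is not minikube source; the command strings are taken from the log and error handling is deliberately minimal.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns a combined error/output message on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v (%s)", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe the sysctl first; the status-255 failure above means the key is absent
	// until the br_netfilter module has been loaded.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println(err)
		}
	}
	// Enable IPv4 forwarding, mirroring the logged "echo 1 > .../ip_forward" step.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println(err)
	}
}
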
	I1212 21:10:39.328819   60628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:10:39.439285   60628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:10:39.619385   60628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:10:39.619462   60628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:10:39.625279   60628 start.go:543] Will wait 60s for crictl version
	I1212 21:10:39.625358   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:39.630234   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 21:10:39.680505   60628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 21:10:39.680579   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.736272   60628 ssh_runner.go:195] Run: crio --version
	I1212 21:10:39.796111   60628 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
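
The "Will wait 60s for socket path /var/run/crio/crio.sock" step above amounts to polling for the runtime socket after restarting CRI-O. The following Go sketch shows one way to do that; it is not the minikube implementation, and the poll interval is an assumption (the path and 60s budget come from the log).

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a UNIX socket or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if info, err := os.Stat(path); err == nil && info.Mode()&os.ModeSocket != 0 {
			return nil // socket is present, the runtime is accepting connections
		}
		time.Sleep(500 * time.Millisecond) // brief back-off between checks
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
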
	I1212 21:10:39.732208   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.732243   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.732258   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.761735   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:10:39.761771   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:10:39.984129   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:39.990620   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:39.990650   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.484444   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.492006   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:10:40.492039   61298 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:10:40.983459   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:10:40.990813   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:10:41.001024   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:10:41.001055   61298 api_server.go:131] duration metric: took 4.518922579s to wait for apiserver health ...
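
The long healthz exchange above (403 while the anonymous probe is still forbidden, 500 while rbac/bootstrap-roles and the priority-class hook are pending, then 200) is a polling loop against the apiserver. A minimal sketch of such a loop is shown below; it is not minikube's api_server.go, the poll interval and overall timeout are assumptions, and the URL is the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the /healthz endpoint until it returns 200 OK.
// Non-200 responses (403, 500) are treated as "not ready yet", matching the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous and skips certificate verification, which is why
		// the apiserver answers 403 for system:anonymous until it is fully up.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.253:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
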
	I1212 21:10:41.001070   61298 cni.go:84] Creating CNI manager for ""
	I1212 21:10:41.001078   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:41.003043   61298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:10:41.004669   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:10:41.084775   61298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:10:41.173688   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:10:41.201100   61298 system_pods.go:59] 9 kube-system pods found
	I1212 21:10:41.201132   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201140   61298 system_pods.go:61] "coredns-5dd5756b68-hc52p" [f8895d1e-3484-4ffe-9d11-f5e4b7617c62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:10:41.201148   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:10:41.201158   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:10:41.201165   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:10:41.201171   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:10:41.201177   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:10:41.201182   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:10:41.201187   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:10:41.201193   61298 system_pods.go:74] duration metric: took 27.476871ms to wait for pod list to return data ...
	I1212 21:10:41.201203   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:10:41.205597   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:10:41.205624   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:10:41.205638   61298 node_conditions.go:105] duration metric: took 4.431218ms to run NodePressure ...
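
The NodePressure step above reports the node's ephemeral-storage and CPU capacity and checks pressure conditions. A small client-go sketch of reading the same fields is given below; it is not minikube's node_conditions.go, and the kubeconfig path is simply the one referenced elsewhere in this log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17734-9188/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the "node storage ephemeral capacity" and
		// "node cpu capacity" lines logged above.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}
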
	I1212 21:10:41.205653   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:10:41.516976   61298 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529555   61298 kubeadm.go:787] kubelet initialised
	I1212 21:10:41.529592   61298 kubeadm.go:788] duration metric: took 12.533051ms waiting for restarted kubelet to initialise ...
	I1212 21:10:41.529601   61298 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:41.538991   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.546618   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546645   61298 pod_ready.go:81] duration metric: took 7.620954ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.546658   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.546667   61298 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.556921   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556951   61298 pod_ready.go:81] duration metric: took 10.273719ms waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.556963   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.556972   61298 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.563538   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563570   61298 pod_ready.go:81] duration metric: took 6.584443ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.563586   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.563598   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.578973   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579009   61298 pod_ready.go:81] duration metric: took 15.402148ms waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.579025   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.579046   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:41.978938   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978972   61298 pod_ready.go:81] duration metric: took 399.914995ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:41.978990   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:41.978999   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:38.930743   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:41.429587   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:39.798106   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetIP
	I1212 21:10:39.800962   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801364   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:10:39.801399   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:10:39.801592   60628 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 21:10:39.806328   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:39.821949   60628 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 21:10:39.822014   60628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:10:39.873704   60628 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 21:10:39.873733   60628 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 21:10:39.873820   60628 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.873840   60628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.873859   60628 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:39.874021   60628 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.874062   60628 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 21:10:39.874043   60628 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.873836   60628 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.874359   60628 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:39.875271   60628 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:39.875369   60628 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:39.875379   60628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:39.875390   60628 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 21:10:39.875428   60628 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:39.875284   60628 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:39.875803   60628 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.060906   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.061267   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.063065   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.074673   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 21:10:40.076082   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.080787   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.108962   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.169237   60628 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 21:10:40.169289   60628 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.169363   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.172419   60628 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.251588   60628 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 21:10:40.251638   60628 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.251684   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.264051   60628 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 21:10:40.264146   60628 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.264227   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397546   60628 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 21:10:40.397590   60628 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.397640   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397669   60628 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 21:10:40.397709   60628 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.397774   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397876   60628 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 21:10:40.397978   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 21:10:40.398033   60628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 21:10:40.398064   60628 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.398079   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 21:10:40.398105   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.397976   60628 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.398142   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 21:10:40.398143   60628 ssh_runner.go:195] Run: which crictl
	I1212 21:10:40.418430   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 21:10:40.418500   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 21:10:40.530581   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530693   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 21:10:40.530781   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.530584   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 21:10:40.530918   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:40.544770   60628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:40.544970   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 21:10:40.545108   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:40.567016   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567130   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:40.567196   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.567297   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:40.604461   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 21:10:40.604484   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.604531   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 21:10:40.604488   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604644   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 21:10:40.604590   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:40.612665   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 21:10:40.612741   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612794   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 21:10:40.612800   60628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 21:10:40.612935   60628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:40.615786   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
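
The cache_images steps above decide that an image "needs transfer" when `crictl images --output json` does not list its tag. The sketch below shows that check in Go; it is not minikube's cache_images.go, and the JSON field names follow the CRI ListImages response as commonly emitted by crictl, so treat them as an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the subset of `crictl images -o json` output used here.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" // one of the images loaded above
	found := false
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				found = true
			}
		}
	}
	fmt.Printf("%s already present in runtime: %v\n", want, found)
}
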
	I1212 21:10:42.378453   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378486   61298 pod_ready.go:81] duration metric: took 399.478547ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.378499   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-proxy-47qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.378508   61298 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:42.778834   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778871   61298 pod_ready.go:81] duration metric: took 400.345358ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:42.778887   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:42.778897   61298 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:43.179851   61298 pod_ready.go:97] node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179879   61298 pod_ready.go:81] duration metric: took 400.97377ms waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:10:43.179891   61298 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-171828" hosting pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.179898   61298 pod_ready.go:38] duration metric: took 1.6502873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
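
The pod_ready wait above boils down to checking each pod's Ready condition, and skipping pods whose node is not yet "Ready" (the repeated "skipping!" lines). The following self-contained Go sketch shows the condition check on an in-memory pod; it is not the pod_ready.go implementation, and the sample pod is a stand-in.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A minimal in-memory pod standing in for coredns-5dd5756b68-b5jrg above,
	// whose node was still NotReady at this point in the run.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
		},
	}
	fmt.Println("ready:", isPodReady(pod))
}
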
	I1212 21:10:43.179913   61298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:10:43.196087   61298 ops.go:34] apiserver oom_adj: -16
	I1212 21:10:43.196114   61298 kubeadm.go:640] restartCluster took 20.701074763s
	I1212 21:10:43.196126   61298 kubeadm.go:406] StartCluster complete in 20.766085453s
	I1212 21:10:43.196146   61298 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.196225   61298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:10:43.198844   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:43.199122   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:10:43.199268   61298 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:10:43.199342   61298 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199363   61298 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199372   61298 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:10:43.199396   61298 config.go:182] Loaded profile config "default-k8s-diff-port-171828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 21:10:43.199456   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199373   61298 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199492   61298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-171828"
	I1212 21:10:43.199389   61298 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-171828"
	I1212 21:10:43.199551   61298 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.199568   61298 addons.go:240] addon metrics-server should already be in state true
	I1212 21:10:43.199637   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.199891   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199915   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.199922   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.199945   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.200148   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.200177   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.218067   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I1212 21:10:43.218679   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1212 21:10:43.218817   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219111   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219234   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1212 21:10:43.219356   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219372   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219590   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.219607   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.219699   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.219807   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220061   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.220258   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.220278   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.220324   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.220436   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.220488   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.220676   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.221418   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.221444   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.224718   61298 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-171828"
	W1212 21:10:43.224742   61298 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:10:43.224769   61298 host.go:66] Checking if "default-k8s-diff-port-171828" exists ...
	I1212 21:10:43.225189   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.225227   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.225431   61298 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-171828" context rescaled to 1 replicas
	I1212 21:10:43.225467   61298 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:10:43.228523   61298 out.go:177] * Verifying Kubernetes components...
	I1212 21:10:43.230002   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:10:43.239165   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1212 21:10:43.239749   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.240357   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.240383   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.240761   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.240937   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.241446   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1212 21:10:43.241951   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.242522   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.242541   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.242864   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.242931   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.244753   61298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:10:43.243219   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.246309   61298 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.246332   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:10:43.246358   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.248809   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.250840   61298 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:10:43.252430   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:10:43.251041   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.250309   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.247068   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I1212 21:10:43.252596   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:10:43.252622   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.252718   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.252745   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.253368   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.253677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.253846   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.254434   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.259686   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.259697   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.259748   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.259844   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.259883   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.259973   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.260149   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.260361   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.260420   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.261546   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:10:43.261594   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:10:43.284357   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I1212 21:10:43.284945   61298 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:10:43.285431   61298 main.go:141] libmachine: Using API Version  1
	I1212 21:10:43.285444   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:10:43.286009   61298 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:10:43.286222   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetState
	I1212 21:10:43.288257   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .DriverName
	I1212 21:10:43.288542   61298 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.288565   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:10:43.288586   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHHostname
	I1212 21:10:43.291842   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.292527   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ee:fd", ip: ""} in network mk-default-k8s-diff-port-171828: {Iface:virbr1 ExpiryTime:2023-12-12 22:10:03 +0000 UTC Type:0 Mac:52:54:00:65:ee:fd Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-171828 Clientid:01:52:54:00:65:ee:fd}
	I1212 21:10:43.292680   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | domain default-k8s-diff-port-171828 has defined IP address 192.168.72.253 and MAC address 52:54:00:65:ee:fd in network mk-default-k8s-diff-port-171828
	I1212 21:10:43.293076   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHPort
	I1212 21:10:43.293350   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHKeyPath
	I1212 21:10:43.293512   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .GetSSHUsername
	I1212 21:10:43.293683   61298 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/default-k8s-diff-port-171828/id_rsa Username:docker}
	I1212 21:10:43.405154   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:10:43.426115   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:10:43.426141   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:10:43.486953   61298 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 21:10:43.486975   61298 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:43.491689   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:10:43.491709   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:10:43.505611   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:10:43.538745   61298 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:43.538785   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:10:43.600598   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:10:44.933368   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.528176624s)
	I1212 21:10:44.933442   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.427784857s)
	I1212 21:10:44.933493   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933511   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933539   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332913009s)
	I1212 21:10:44.933496   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933559   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933566   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933569   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.933926   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.933943   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933944   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.933955   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.933964   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.933974   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934081   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934096   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934118   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934120   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934127   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934132   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934138   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934156   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.934372   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934397   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.934401   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934677   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.934808   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.934845   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.934858   61298 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-171828"
	I1212 21:10:44.937727   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) DBG | Closing plugin on server side
	I1212 21:10:44.937783   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.937806   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.945948   61298 main.go:141] libmachine: Making call to close driver server
	I1212 21:10:44.945966   61298 main.go:141] libmachine: (default-k8s-diff-port-171828) Calling .Close
	I1212 21:10:44.946202   61298 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:10:44.946220   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:10:44.949385   61298 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1212 21:10:43.688668   60948 retry.go:31] will retry after 13.919612963s: kubelet not initialised
	I1212 21:10:44.951009   61298 addons.go:502] enable addons completed in 1.751742212s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1212 21:10:45.583280   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:43.432062   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:45.929995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.305027541s)
	I1212 21:10:43.909740   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 21:10:43.909699   60628 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.296738263s)
	I1212 21:10:43.909764   60628 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:43.909770   60628 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 21:10:43.909810   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 21:10:45.879475   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969630074s)
	I1212 21:10:45.879502   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 21:10:45.879527   60628 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:45.879592   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 21:10:47.584004   61298 node_ready.go:58] node "default-k8s-diff-port-171828" has status "Ready":"False"
	I1212 21:10:50.113807   61298 node_ready.go:49] node "default-k8s-diff-port-171828" has status "Ready":"True"
	I1212 21:10:50.113837   61298 node_ready.go:38] duration metric: took 6.626786171s waiting for node "default-k8s-diff-port-171828" to be "Ready" ...
	I1212 21:10:50.113850   61298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:10:50.128903   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656130   61298 pod_ready.go:92] pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:50.656153   61298 pod_ready.go:81] duration metric: took 527.212389ms waiting for pod "coredns-5dd5756b68-b5jrg" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:50.656161   61298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:47.931716   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.433176   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:50.267864   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.388242252s)
	I1212 21:10:50.267898   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 21:10:50.267931   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:50.267977   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 21:10:52.845895   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.577890173s)
	I1212 21:10:52.845935   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 21:10:52.845969   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.846023   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 21:10:52.677971   61298 pod_ready.go:102] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:53.179154   61298 pod_ready.go:92] pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.179186   61298 pod_ready.go:81] duration metric: took 2.523018353s waiting for pod "coredns-5dd5756b68-hc52p" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.179200   61298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185649   61298 pod_ready.go:92] pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:53.185673   61298 pod_ready.go:81] duration metric: took 6.463925ms waiting for pod "etcd-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:53.185685   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193280   61298 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.193303   61298 pod_ready.go:81] duration metric: took 1.00761061s waiting for pod "kube-apiserver-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.193313   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484196   61298 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.484223   61298 pod_ready.go:81] duration metric: took 290.902142ms waiting for pod "kube-controller-manager-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.484240   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883746   61298 pod_ready.go:92] pod "kube-proxy-47qmb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:54.883773   61298 pod_ready.go:81] duration metric: took 399.524854ms waiting for pod "kube-proxy-47qmb" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:54.883784   61298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283637   61298 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace has status "Ready":"True"
	I1212 21:10:55.283670   61298 pod_ready.go:81] duration metric: took 399.871874ms waiting for pod "kube-scheduler-default-k8s-diff-port-171828" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:55.283684   61298 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	I1212 21:10:52.931372   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.932174   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:54.204367   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.358317317s)
	I1212 21:10:54.204393   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 21:10:54.204425   60628 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:54.204485   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 21:10:56.066774   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.862261726s)
	I1212 21:10:56.066802   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 21:10:56.066825   60628 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:56.066874   60628 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 21:10:57.118959   60628 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.052055479s)
	I1212 21:10:57.118985   60628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17734-9188/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 21:10:57.119009   60628 cache_images.go:123] Successfully loaded all cached images
	I1212 21:10:57.119021   60628 cache_images.go:92] LoadImages completed in 17.245274715s
	I1212 21:10:57.119103   60628 ssh_runner.go:195] Run: crio config
	I1212 21:10:57.180068   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:10:57.180093   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:10:57.180109   60628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 21:10:57.180127   60628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-343495 NodeName:no-preload-343495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:10:57.180250   60628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-343495"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:10:57.180330   60628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-343495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 21:10:57.180382   60628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 21:10:57.191949   60628 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 21:10:57.192034   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:10:57.202921   60628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 21:10:57.219512   60628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 21:10:57.235287   60628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1212 21:10:57.252278   60628 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1212 21:10:57.256511   60628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:10:57.268744   60628 certs.go:56] Setting up /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495 for IP: 192.168.61.176
	I1212 21:10:57.268781   60628 certs.go:190] acquiring lock for shared ca certs: {Name:mk405425af4978270efc24269a39ede0dab3bd91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:10:57.268959   60628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key
	I1212 21:10:57.269032   60628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key
	I1212 21:10:57.269133   60628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/client.key
	I1212 21:10:57.269228   60628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key.492ad1cf
	I1212 21:10:57.269316   60628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key
	I1212 21:10:57.269466   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem (1338 bytes)
	W1212 21:10:57.269511   60628 certs.go:433] ignoring /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456_empty.pem, impossibly tiny 0 bytes
	I1212 21:10:57.269526   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 21:10:57.269562   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:10:57.269597   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:10:57.269629   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/certs/home/jenkins/minikube-integration/17734-9188/.minikube/certs/key.pem (1675 bytes)
	I1212 21:10:57.269685   60628 certs.go:437] found cert: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem (1708 bytes)
	I1212 21:10:57.270311   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 21:10:57.295960   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:10:57.320157   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:10:57.344434   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/no-preload-343495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 21:10:57.368906   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:10:57.391830   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:10:57.415954   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:10:57.441182   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:10:57.465055   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/ssl/certs/164562.pem --> /usr/share/ca-certificates/164562.pem (1708 bytes)
	I1212 21:10:57.489788   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:10:57.513828   60628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17734-9188/.minikube/certs/16456.pem --> /usr/share/ca-certificates/16456.pem (1338 bytes)
	I1212 21:10:57.536138   60628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:10:57.553168   60628 ssh_runner.go:195] Run: openssl version
	I1212 21:10:57.558771   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164562.pem && ln -fs /usr/share/ca-certificates/164562.pem /etc/ssl/certs/164562.pem"
	I1212 21:10:57.570141   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574935   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:06 /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.574990   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164562.pem
	I1212 21:10:57.580985   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164562.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 21:10:57.592528   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 21:10:57.603477   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608448   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.608511   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:10:57.614316   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 21:10:57.625667   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16456.pem && ln -fs /usr/share/ca-certificates/16456.pem /etc/ssl/certs/16456.pem"
	I1212 21:10:57.637284   60628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642258   60628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:06 /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.642323   60628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16456.pem
	I1212 21:10:57.648072   60628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16456.pem /etc/ssl/certs/51391683.0"
	I1212 21:10:57.659762   60628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 21:10:57.664517   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:10:57.670385   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:10:57.676336   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:10:57.682074   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:10:57.688387   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:10:57.694542   60628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
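The certificate checks above run "openssl x509 -noout -checkend 86400" against each control-plane certificate, i.e. they verify that none of them expires within the next 24 hours. A minimal Go sketch of the same check using crypto/x509 follows; the certificate path in main is illustrative only, picked from the files the log inspects.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // answering the same question "openssl x509 -checkend" encodes in its exit status.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        // Illustrative path; the log checks several certs under /var/lib/minikube/certs.
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }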
	I1212 21:10:57.700400   60628 kubeadm.go:404] StartCluster: {Name:no-preload-343495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-343495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 21:10:57.700520   60628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:10:57.700576   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:10:57.738703   60628 cri.go:89] found id: ""
	I1212 21:10:57.738776   60628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:10:57.749512   60628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 21:10:57.749538   60628 kubeadm.go:636] restartCluster start
	I1212 21:10:57.749610   60628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:10:57.758905   60628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.760000   60628 kubeconfig.go:92] found "no-preload-343495" server: "https://192.168.61.176:8443"
	I1212 21:10:57.762219   60628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:10:57.773107   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.773181   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.785478   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.785500   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:57.785554   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:57.797412   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:57.613799   60948 retry.go:31] will retry after 13.009137494s: kubelet not initialised
	I1212 21:10:57.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.591232   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:02.093666   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:57.429861   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:59.429944   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:01.438267   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:10:58.297630   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.297712   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.312155   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:58.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:58.797652   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:58.809726   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.297677   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.309875   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:10:59.798441   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:10:59.798531   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:10:59.810533   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.298154   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.298237   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.310050   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:00.797585   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:00.797683   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:00.809712   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.298094   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.298224   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.310181   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:01.797635   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:01.797742   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:01.809336   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.297912   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.297997   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.309215   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:02.797666   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:02.797749   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:02.808815   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.590426   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.590850   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.929977   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:06.429697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:03.297975   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.298066   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.308865   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:03.798103   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:03.798207   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:03.809553   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.297580   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.297653   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.309100   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:04.797646   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:04.797767   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:04.809269   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.297574   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.297665   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.309281   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:05.797809   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:05.797898   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:05.809794   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.298381   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.298497   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.309467   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:06.798050   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:06.798132   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:06.809758   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.298354   60628 api_server.go:166] Checking apiserver status ...
	I1212 21:11:07.298434   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:11:07.309655   60628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:11:07.773157   60628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 21:11:07.773216   60628 kubeadm.go:1135] stopping kube-system containers ...
	I1212 21:11:07.773229   60628 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:11:07.773290   60628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:11:07.815986   60628 cri.go:89] found id: ""
	I1212 21:11:07.816068   60628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:11:07.832950   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:11:07.842287   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:11:07.842353   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851694   60628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 21:11:07.851720   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:10.630075   60948 kubeadm.go:787] kubelet initialised
	I1212 21:11:10.630105   60948 kubeadm.go:788] duration metric: took 47.146743334s waiting for restarted kubelet to initialise ...
	I1212 21:11:10.630116   60948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:10.637891   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644674   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.644700   60948 pod_ready.go:81] duration metric: took 6.771094ms waiting for pod "coredns-5644d7b6d9-7nkxh" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.644710   60948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651801   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.651830   60948 pod_ready.go:81] duration metric: took 7.112566ms waiting for pod "coredns-5644d7b6d9-slvnx" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.651845   60948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659678   60948 pod_ready.go:92] pod "etcd-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.659700   60948 pod_ready.go:81] duration metric: took 7.845111ms waiting for pod "etcd-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.659711   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665929   60948 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:10.665958   60948 pod_ready.go:81] duration metric: took 6.237833ms waiting for pod "kube-apiserver-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:10.665972   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028938   60948 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.028961   60948 pod_ready.go:81] duration metric: took 362.981718ms waiting for pod "kube-controller-manager-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.028973   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428824   60948 pod_ready.go:92] pod "kube-proxy-5mvzb" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.428853   60948 pod_ready.go:81] duration metric: took 399.87314ms waiting for pod "kube-proxy-5mvzb" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.428866   60948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828546   60948 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:11.828578   60948 pod_ready.go:81] duration metric: took 399.696769ms waiting for pod "kube-scheduler-old-k8s-version-372099" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:11.828590   60948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
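The pod_ready waits above poll each system-critical pod until its Ready condition turns True; the metrics-server pods keep reporting "Ready":"False" throughout this run (the StartCluster config points their registry at the fake.domain stub). A minimal client-go sketch of such a readiness wait, with the kubeconfig path and pod name chosen purely for illustration and not taken from minikube's own helpers:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports the PodReady condition as True.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API errors as transient and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        // Illustrative kubeconfig path and pod name.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "kube-scheduler-old-k8s-version-372099", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }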
	I1212 21:11:09.094309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:11.098257   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:08.928635   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:10.929896   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:07.988857   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.772924   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:08.980401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:09.108938   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
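Because existing configuration files were found, the restart path reconfigures the control plane with individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A minimal sketch of driving that same phase sequence from Go with os/exec; the binary and config paths mirror the commands in the log, and error handling is simplified:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            // Each phase is run against the regenerated kubeadm config.
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command(kubeadm, args...).CombinedOutput()
            if err != nil {
                fmt.Printf("%v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("control plane phases completed")
    }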
	I1212 21:11:09.189716   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:11:09.189780   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.201432   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:09.722085   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.222325   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:10.721931   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.222186   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.721642   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:11:11.745977   60628 api_server.go:72] duration metric: took 2.556260463s to wait for apiserver process to appear ...
	I1212 21:11:11.746005   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:11:11.746025   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:14.135897   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.138482   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:13.590920   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.591230   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:12.931314   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:15.429327   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:16.294367   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.294401   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.294413   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.347744   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:11:16.347780   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:11:16.848435   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:16.853773   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:16.853823   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.348312   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.359543   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.359579   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:17.848425   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:17.853966   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 21:11:17.854006   60628 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 21:11:18.348644   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:11:18.373028   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:11:18.385301   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:11:18.385341   60628 api_server.go:131] duration metric: took 6.639327054s to wait for apiserver health ...
	I1212 21:11:18.385353   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:11:18.385362   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:11:18.387289   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:11:18.636422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:20.636472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.592197   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.593157   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:21.594049   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:17.434254   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:19.930697   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:18.388998   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:11:18.449634   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 21:11:18.491001   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:11:18.517694   60628 system_pods.go:59] 8 kube-system pods found
	I1212 21:11:18.517729   60628 system_pods.go:61] "coredns-76f75df574-s9jgn" [b13d32b4-a44b-4f79-bece-d0adafef4c7c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:11:18.517740   60628 system_pods.go:61] "etcd-no-preload-343495" [ad48db04-9c79-48e9-a001-1a9061c43cb9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:11:18.517754   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [24d024c1-a89f-4ede-8dbf-7502f0179cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:11:18.517760   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [10ce49e3-2679-4ac5-89aa-9179582ae778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:11:18.517765   60628 system_pods.go:61] "kube-proxy-492l6" [3a2bbe46-0506-490f-aae8-a97e48f3205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:11:18.517773   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [bca80470-c204-4a34-9c7d-5de3ad382c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:11:18.517778   60628 system_pods.go:61] "metrics-server-57f55c9bc5-tmmk4" [11066021-353e-418e-9c7f-78e72dae44a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:11:18.517785   60628 system_pods.go:61] "storage-provisioner" [e681d4cd-f2f6-4cf3-ba09-0f361a64aafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:11:18.517794   60628 system_pods.go:74] duration metric: took 26.756848ms to wait for pod list to return data ...
	I1212 21:11:18.517815   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:11:18.521330   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:11:18.521362   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:11:18.521377   60628 node_conditions.go:105] duration metric: took 3.557177ms to run NodePressure ...
	I1212 21:11:18.521401   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:11:18.945267   60628 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958848   60628 kubeadm.go:787] kubelet initialised
	I1212 21:11:18.958877   60628 kubeadm.go:788] duration metric: took 13.578451ms waiting for restarted kubelet to initialise ...
	I1212 21:11:18.958886   60628 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:11:18.964819   60628 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:20.987111   60628 pod_ready.go:102] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.494268   60628 pod_ready.go:92] pod "coredns-76f75df574-s9jgn" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:22.494299   60628 pod_ready.go:81] duration metric: took 3.529452237s waiting for pod "coredns-76f75df574-s9jgn" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:22.494311   60628 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:23.136140   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:25.635800   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.093215   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.590861   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:22.429921   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.928565   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:26.929668   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:24.514490   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.013783   60628 pod_ready.go:102] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:27.637165   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:30.133948   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.091057   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.598428   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:28.930654   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:31.428436   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:29.514918   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.514945   60628 pod_ready.go:81] duration metric: took 7.020626508s waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.514955   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524669   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.524696   60628 pod_ready.go:81] duration metric: took 9.734059ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.524709   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541808   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.541830   60628 pod_ready.go:81] duration metric: took 17.113672ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.541839   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553955   60628 pod_ready.go:92] pod "kube-proxy-492l6" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.553979   60628 pod_ready.go:81] duration metric: took 12.134143ms waiting for pod "kube-proxy-492l6" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.553988   60628 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562798   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:11:29.562835   60628 pod_ready.go:81] duration metric: took 8.836628ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:29.562850   60628 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	I1212 21:11:31.818614   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:32.134558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.135376   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.634429   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:34.090158   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.091290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.429336   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:35.430448   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:33.819222   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:36.318847   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.637527   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:41.134980   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.115262   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.591502   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:37.929700   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:39.929830   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:38.318911   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:40.319619   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.319750   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.135558   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.635174   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:43.090309   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:45.590529   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:42.434126   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.931810   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:44.818997   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.321699   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.635472   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.636294   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.640471   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.590577   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.590885   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.591122   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:47.429836   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.431518   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:51.928631   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:49.823419   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:52.319752   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.137390   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.634152   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.593196   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.089777   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:53.929750   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:55.932860   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:54.321554   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:56.819877   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.136605   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.092816   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.591682   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:58.429543   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:00.432255   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:11:59.318053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:01.325068   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.137023   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.635397   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.091397   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.094195   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:02.933370   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:05.430020   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:03.819751   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:06.319806   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.137648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.635154   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.591471   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.091503   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:07.430684   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:09.929393   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:08.319984   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:10.821053   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.637206   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.136850   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.590992   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.591391   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.591744   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:12.429299   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:14.429724   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:16.430114   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:13.329939   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:15.820117   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.820519   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:17.199675   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.635179   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.635426   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:19.091628   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:21.091739   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:18.929340   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.929933   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:20.319134   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.819399   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:24.133408   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:26.134293   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:23.093543   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.591828   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:22.930710   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.434148   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:25.319949   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.337078   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.134422   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.137461   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:28.090730   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:30.092555   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:27.928685   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.929200   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.929272   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:29.819461   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:31.819541   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.633893   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.636198   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.636373   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:32.590019   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:34.590953   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.591420   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.929488   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:35.929671   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:33.819661   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:36.322177   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.137315   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.635168   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.097607   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:41.590836   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:37.930820   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:39.930916   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:38.324332   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:40.819395   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.819784   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.640489   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.134648   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:43.590910   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.592083   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:42.429717   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:44.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:46.431053   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:45.320122   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:47.819547   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.135328   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.137213   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.091979   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.093149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:48.929529   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:51.428177   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:50.319560   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.820242   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.635136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.637000   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:52.591430   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.090634   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:53.429307   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:55.429455   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:54.821647   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.319971   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.135608   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.137606   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.634197   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.590565   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:00.091074   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:57.429785   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.928834   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:12:59.818255   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:01.819526   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:03.635008   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.134591   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.591023   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.592260   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.092331   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:02.430411   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.930385   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:04.326885   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:06.822828   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:08.135379   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:10.136957   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.590114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.593478   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:07.434219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.929736   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.930477   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:09.322955   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:11.819793   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:12.137554   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.635349   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.637857   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.092558   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.591772   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.429362   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.931219   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:14.319867   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:16.325224   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.135196   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.634789   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.090842   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.591235   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:19.430522   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:21.929464   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:18.326463   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:20.819839   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:22.820060   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.636879   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.135188   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.591676   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.591833   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:23.929811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:26.429286   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:25.319356   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.819668   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.634130   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.635441   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:27.591961   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.090560   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:32.091429   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:28.929344   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:30.929561   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:29.820548   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:31.820901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.134798   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.635317   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.094290   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.589895   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:33.429811   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:35.429995   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:34.319447   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:36.822690   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.636833   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.136281   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:38.591586   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.090302   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:37.929337   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:40.428532   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:39.321656   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:41.820917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.635037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.135037   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:43.091587   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:45.590322   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:42.429616   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.430483   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.431960   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:44.319403   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:46.326448   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.136136   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:49.635013   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.635308   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:47.592114   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:50.089825   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:52.090721   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.928619   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.429031   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:48.820121   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:51.319794   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.635440   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.134872   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:54.589746   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.590432   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.429817   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:55.929211   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:53.820666   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:56.322986   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.135622   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.139553   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.592602   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:01.091154   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:57.929777   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:59.930300   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:13:58.818901   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:00.819587   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.634488   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.636059   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:03.591886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:06.091886   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:02.432472   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:04.929381   60833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.125384   60833 pod_ready.go:81] duration metric: took 4m0.000960425s waiting for pod "metrics-server-57f55c9bc5-v978l" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:05.125428   60833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:05.125437   60833 pod_ready.go:38] duration metric: took 4m2.799403108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:05.125453   60833 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:05.125518   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:05.125592   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:05.203017   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:05.203045   60833 cri.go:89] found id: ""
	I1212 21:14:05.203054   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:05.203115   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.208622   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:05.208693   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:05.250079   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.250102   60833 cri.go:89] found id: ""
	I1212 21:14:05.250118   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:05.250161   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.254870   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:05.254946   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:05.323718   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:05.323748   60833 cri.go:89] found id: ""
	I1212 21:14:05.323757   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:05.323819   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.328832   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:05.328902   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:05.372224   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.372252   60833 cri.go:89] found id: ""
	I1212 21:14:05.372262   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:05.372316   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.377943   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:05.378007   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:05.417867   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:05.417894   60833 cri.go:89] found id: ""
	I1212 21:14:05.417905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:05.417961   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.422198   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:05.422264   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:05.462031   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:05.462052   60833 cri.go:89] found id: ""
	I1212 21:14:05.462059   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:05.462114   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.466907   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:05.466962   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:05.512557   60833 cri.go:89] found id: ""
	I1212 21:14:05.512585   60833 logs.go:284] 0 containers: []
	W1212 21:14:05.512592   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:05.512597   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:05.512663   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:05.553889   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.553914   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:05.553921   60833 cri.go:89] found id: ""
	I1212 21:14:05.553929   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:05.553982   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.558864   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:05.563550   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:05.563572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:05.627093   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:05.627135   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:05.642800   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:05.642827   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:05.820642   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:05.820683   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:05.871256   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:05.871299   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:05.913399   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:05.913431   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:05.955061   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:05.955103   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:06.012639   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:06.012681   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:06.057933   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:06.057970   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:06.110367   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:06.110400   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:06.173711   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:06.173746   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:06.214291   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:06.214328   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:06.260105   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:06.260142   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:03.320010   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:05.321011   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.821313   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:07.134137   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.635405   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:08.591048   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:10.593286   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:09.219373   60833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:09.237985   60833 api_server.go:72] duration metric: took 4m14.403294004s to wait for apiserver process to appear ...
	I1212 21:14:09.238014   60833 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:09.238057   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:09.238119   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:09.281005   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:09.281028   60833 cri.go:89] found id: ""
	I1212 21:14:09.281037   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:09.281097   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.285354   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:09.285436   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:09.336833   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.336864   60833 cri.go:89] found id: ""
	I1212 21:14:09.336874   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:09.336937   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.342850   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:09.342928   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:09.387107   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.387133   60833 cri.go:89] found id: ""
	I1212 21:14:09.387143   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:09.387202   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.392729   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:09.392806   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:09.433197   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:09.433225   60833 cri.go:89] found id: ""
	I1212 21:14:09.433232   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:09.433281   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.438043   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:09.438092   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:09.486158   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.486185   60833 cri.go:89] found id: ""
	I1212 21:14:09.486200   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:09.486255   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.491667   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:09.491735   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:09.536085   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:09.536108   60833 cri.go:89] found id: ""
	I1212 21:14:09.536114   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:09.536165   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.540939   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:09.541008   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:09.585160   60833 cri.go:89] found id: ""
	I1212 21:14:09.585187   60833 logs.go:284] 0 containers: []
	W1212 21:14:09.585195   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:09.585200   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:09.585254   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:09.628972   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:09.629001   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:09.629008   60833 cri.go:89] found id: ""
	I1212 21:14:09.629017   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:09.629075   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.634242   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:09.639308   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:09.639344   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:09.766299   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:09.766329   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:09.816655   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:09.816699   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:09.863184   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:09.863212   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:09.924345   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:09.924382   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:10.363852   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:10.363897   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:10.417375   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:10.417407   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:10.432758   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:10.432788   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:10.483732   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:10.483778   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:10.538254   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:10.538283   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:10.598142   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:10.598174   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:10.650678   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:10.650710   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:10.697971   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:10.698000   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:10.318636   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.321917   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:12.134600   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:14.134822   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.634845   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.091008   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:15.589901   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:13.241720   60833 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I1212 21:14:13.248465   60833 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I1212 21:14:13.249814   60833 api_server.go:141] control plane version: v1.28.4
	I1212 21:14:13.249839   60833 api_server.go:131] duration metric: took 4.011816395s to wait for apiserver health ...
	I1212 21:14:13.249848   60833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:14:13.249871   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:13.249916   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:13.300138   60833 cri.go:89] found id: "c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:13.300161   60833 cri.go:89] found id: ""
	I1212 21:14:13.300171   60833 logs.go:284] 1 containers: [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2]
	I1212 21:14:13.300228   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.306350   60833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:13.306424   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:13.358644   60833 cri.go:89] found id: "aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.358667   60833 cri.go:89] found id: ""
	I1212 21:14:13.358676   60833 logs.go:284] 1 containers: [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be]
	I1212 21:14:13.358737   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.363921   60833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:13.363989   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:13.413339   60833 cri.go:89] found id: "41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:13.413366   60833 cri.go:89] found id: ""
	I1212 21:14:13.413374   60833 logs.go:284] 1 containers: [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843]
	I1212 21:14:13.413420   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.418188   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:13.418248   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:13.461495   60833 cri.go:89] found id: "6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:13.461522   60833 cri.go:89] found id: ""
	I1212 21:14:13.461532   60833 logs.go:284] 1 containers: [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470]
	I1212 21:14:13.461581   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.465878   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:13.465951   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:13.511866   60833 cri.go:89] found id: "bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.511895   60833 cri.go:89] found id: ""
	I1212 21:14:13.511905   60833 logs.go:284] 1 containers: [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f]
	I1212 21:14:13.511960   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.516312   60833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:13.516381   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:13.560993   60833 cri.go:89] found id: "a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.561023   60833 cri.go:89] found id: ""
	I1212 21:14:13.561034   60833 logs.go:284] 1 containers: [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e]
	I1212 21:14:13.561092   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.565439   60833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:13.565514   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:13.608401   60833 cri.go:89] found id: ""
	I1212 21:14:13.608434   60833 logs.go:284] 0 containers: []
	W1212 21:14:13.608445   60833 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:13.608452   60833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:13.608507   60833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:13.661929   60833 cri.go:89] found id: "1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:13.661956   60833 cri.go:89] found id: "0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:13.661963   60833 cri.go:89] found id: ""
	I1212 21:14:13.661972   60833 logs.go:284] 2 containers: [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653]
	I1212 21:14:13.662036   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.667039   60833 ssh_runner.go:195] Run: which crictl
	I1212 21:14:13.671770   60833 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:13.671791   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:13.793637   60833 logs.go:123] Gathering logs for etcd [aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be] ...
	I1212 21:14:13.793671   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa3b65804db3f5ca96a98c58805b35e142b120be97d69183d7b3f5d7b06a03be"
	I1212 21:14:13.844253   60833 logs.go:123] Gathering logs for kube-proxy [bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f] ...
	I1212 21:14:13.844286   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1393c2dcb2571cdaff9c9e7ef79aa0d2ef05fdcaf2153aff2dddb3c9d3d82f"
	I1212 21:14:13.886965   60833 logs.go:123] Gathering logs for kube-controller-manager [a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e] ...
	I1212 21:14:13.886997   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ada7ed54f936910c5df5ab7fca776896fbb20839e16831ab858b13fb49e48e"
	I1212 21:14:13.946537   60833 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:13.946572   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:13.999732   60833 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:13.999769   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:14.015819   60833 logs.go:123] Gathering logs for kube-scheduler [6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470] ...
	I1212 21:14:14.015849   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a76cf81a377eb1edf4e122f875343831eb7871f00676ab2cd7c7ff35acb4470"
	I1212 21:14:14.063649   60833 logs.go:123] Gathering logs for container status ...
	I1212 21:14:14.063684   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:14.116465   60833 logs.go:123] Gathering logs for kube-apiserver [c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2] ...
	I1212 21:14:14.116499   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8c7037baeaeed16ee9ef762946ea7dddb11abc5bf8e5adfa6b1377421fce9b2"
	I1212 21:14:14.179838   60833 logs.go:123] Gathering logs for coredns [41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843] ...
	I1212 21:14:14.179875   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41483ce2844cd1a2703920d8594823bb58e44c039a0beda3a2648ecab3d66843"
	I1212 21:14:14.224213   60833 logs.go:123] Gathering logs for storage-provisioner [1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9] ...
	I1212 21:14:14.224243   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1703f1d5be8cca2f44b577b23b3c70f64a268b5b0a0436352f3c03a1fa089de9"
	I1212 21:14:14.262832   60833 logs.go:123] Gathering logs for storage-provisioner [0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653] ...
	I1212 21:14:14.262858   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0285b9b54f023610a14a6724e87b815d16b24f2424f9b5ded8f712b4c4689653"
	I1212 21:14:14.307981   60833 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:14.308008   60833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:17.188864   60833 system_pods.go:59] 8 kube-system pods found
	I1212 21:14:17.188919   60833 system_pods.go:61] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.188927   60833 system_pods.go:61] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.188934   60833 system_pods.go:61] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.188943   60833 system_pods.go:61] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.188950   60833 system_pods.go:61] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.188959   60833 system_pods.go:61] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.188980   60833 system_pods.go:61] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.188988   60833 system_pods.go:61] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.188996   60833 system_pods.go:74] duration metric: took 3.939142839s to wait for pod list to return data ...
	I1212 21:14:17.189005   60833 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:14:17.192352   60833 default_sa.go:45] found service account: "default"
	I1212 21:14:17.192390   60833 default_sa.go:55] duration metric: took 3.37914ms for default service account to be created ...
	I1212 21:14:17.192400   60833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:14:17.198396   60833 system_pods.go:86] 8 kube-system pods found
	I1212 21:14:17.198424   60833 system_pods.go:89] "coredns-5dd5756b68-zj5wn" [8f51596e-d7e1-40de-9394-5788ff7fde7b] Running
	I1212 21:14:17.198429   60833 system_pods.go:89] "etcd-embed-certs-831188" [cc3edfe5-b6c1-4c37-9ee8-ab0e47061048] Running
	I1212 21:14:17.198433   60833 system_pods.go:89] "kube-apiserver-embed-certs-831188" [2dbbebde-7d74-44d9-b7e7-12988ca2b6ee] Running
	I1212 21:14:17.198438   60833 system_pods.go:89] "kube-controller-manager-embed-certs-831188" [e41b8256-3e66-4a76-b3f0-4a54bd490f08] Running
	I1212 21:14:17.198442   60833 system_pods.go:89] "kube-proxy-nsv4w" [621a8605-777d-4fab-8884-16de1091e792] Running
	I1212 21:14:17.198446   60833 system_pods.go:89] "kube-scheduler-embed-certs-831188" [4fff3885-a6d3-4c59-bd85-674fd8148e06] Running
	I1212 21:14:17.198455   60833 system_pods.go:89] "metrics-server-57f55c9bc5-v978l" [5870eb0c-b40b-4fc5-bf09-de1ed799993c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:14:17.198459   60833 system_pods.go:89] "storage-provisioner" [a48c6632-0d79-4b43-ad2b-78c090c9b1f8] Running
	I1212 21:14:17.198466   60833 system_pods.go:126] duration metric: took 6.060971ms to wait for k8s-apps to be running ...
	I1212 21:14:17.198473   60833 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:14:17.198513   60833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:14:17.217190   60833 system_svc.go:56] duration metric: took 18.71037ms WaitForService to wait for kubelet.
	I1212 21:14:17.217224   60833 kubeadm.go:581] duration metric: took 4m22.382539055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:14:17.217249   60833 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:14:17.221504   60833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:14:17.221540   60833 node_conditions.go:123] node cpu capacity is 2
	I1212 21:14:17.221555   60833 node_conditions.go:105] duration metric: took 4.300742ms to run NodePressure ...
	I1212 21:14:17.221569   60833 start.go:228] waiting for startup goroutines ...
	I1212 21:14:17.221577   60833 start.go:233] waiting for cluster config update ...
	I1212 21:14:17.221594   60833 start.go:242] writing updated cluster config ...
	I1212 21:14:17.221939   60833 ssh_runner.go:195] Run: rm -f paused
	I1212 21:14:17.277033   60833 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:14:17.279044   60833 out.go:177] * Done! kubectl is now configured to use "embed-certs-831188" cluster and "default" namespace by default
	I1212 21:14:14.818262   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:16.823731   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:18.634990   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.135517   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:17.593149   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:20.091419   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:22.091781   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:19.320462   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:21.819129   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.636400   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.134084   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:24.591552   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:27.090974   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:23.825879   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:26.318691   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.135741   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:30.635812   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:29.091882   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.590150   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:28.819815   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:31.319140   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.134738   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:35.637961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.591929   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.091976   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:33.819872   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:36.325409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.139066   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.635659   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.591006   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:41.090674   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:38.819966   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:40.820281   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.135071   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.635762   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.091695   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.595126   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:43.323343   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:45.819822   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.134846   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.135229   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.092328   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.591470   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:48.319483   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:50.819702   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.135550   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:54.634163   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:56.634961   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:52.593957   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.091338   61298 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.284411   61298 pod_ready.go:81] duration metric: took 4m0.000712263s waiting for pod "metrics-server-57f55c9bc5-fqrqh" in "kube-system" namespace to be "Ready" ...
	E1212 21:14:55.284453   61298 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:14:55.284462   61298 pod_ready.go:38] duration metric: took 4m5.170596318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:14:55.284486   61298 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:14:55.284536   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:55.284595   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:55.345012   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:55.345043   61298 cri.go:89] found id: ""
	I1212 21:14:55.345055   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:55.345118   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.350261   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:55.350329   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:55.403088   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:55.403116   61298 cri.go:89] found id: ""
	I1212 21:14:55.403124   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:55.403169   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.408043   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:55.408103   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:55.449581   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:55.449608   61298 cri.go:89] found id: ""
	I1212 21:14:55.449615   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:55.449670   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.454762   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:55.454828   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:55.502919   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:55.502960   61298 cri.go:89] found id: ""
	I1212 21:14:55.502970   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:55.503050   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.508035   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:55.508101   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:55.550335   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:55.550365   61298 cri.go:89] found id: ""
	I1212 21:14:55.550376   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:55.550437   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.554985   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:55.555043   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:55.599678   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:55.599707   61298 cri.go:89] found id: ""
	I1212 21:14:55.599716   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:55.599772   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.604830   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:55.604913   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:55.651733   61298 cri.go:89] found id: ""
	I1212 21:14:55.651767   61298 logs.go:284] 0 containers: []
	W1212 21:14:55.651774   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:55.651779   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:55.651825   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:55.690691   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:55.690716   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.690723   61298 cri.go:89] found id: ""
	I1212 21:14:55.690732   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:55.690778   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.695227   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:55.699700   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:14:55.699723   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:14:55.751176   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:14:55.751210   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:55.789388   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:14:55.789417   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:14:56.270250   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:56.270296   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:56.315517   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:14:56.315549   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:14:56.377591   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:14:56.377648   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:56.432089   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:56.432124   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:56.496004   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:56.496038   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:56.543979   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:56.544010   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:56.599613   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:14:56.599644   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:56.646113   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:14:56.646146   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:56.693081   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:56.693111   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:56.709557   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:14:56.709591   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:14:53.319742   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:55.320811   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:57.820478   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.134092   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:01.135233   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:14:59.366965   61298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:14:59.385251   61298 api_server.go:72] duration metric: took 4m16.159743319s to wait for apiserver process to appear ...
	I1212 21:14:59.385280   61298 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:14:59.385317   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:14:59.385365   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:14:59.433011   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:14:59.433038   61298 cri.go:89] found id: ""
	I1212 21:14:59.433047   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:14:59.433088   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.438059   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:14:59.438136   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:14:59.477000   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.477078   61298 cri.go:89] found id: ""
	I1212 21:14:59.477093   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:14:59.477153   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.481716   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:14:59.481791   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:14:59.526936   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.526966   61298 cri.go:89] found id: ""
	I1212 21:14:59.526975   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:14:59.527037   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.535907   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:14:59.535985   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:14:59.580818   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:14:59.580848   61298 cri.go:89] found id: ""
	I1212 21:14:59.580856   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:14:59.580916   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.585685   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:14:59.585733   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:14:59.640697   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:14:59.640721   61298 cri.go:89] found id: ""
	I1212 21:14:59.640731   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:14:59.640798   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.644940   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:14:59.645004   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:14:59.687873   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.687901   61298 cri.go:89] found id: ""
	I1212 21:14:59.687910   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:14:59.687960   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.692382   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:14:59.692442   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:14:59.735189   61298 cri.go:89] found id: ""
	I1212 21:14:59.735225   61298 logs.go:284] 0 containers: []
	W1212 21:14:59.735235   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:14:59.735256   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:14:59.735323   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:14:59.778668   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:14:59.778702   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:14:59.778708   61298 cri.go:89] found id: ""
	I1212 21:14:59.778717   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:14:59.778773   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.782827   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:14:59.787277   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:14:59.787303   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:14:59.802470   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:14:59.802499   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:14:59.864191   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:14:59.864225   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:14:59.911007   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:14:59.911032   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:14:59.975894   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:14:59.975932   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:00.021750   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:00.021780   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:00.061527   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:00.061557   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:15:00.484318   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:00.484359   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:00.549321   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:00.549357   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:00.600589   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:00.600629   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:00.643660   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:00.643690   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:00.698016   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:00.698047   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:00.741819   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:00.741850   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:00.319685   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:02.320017   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.136545   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:05.635632   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:03.383318   61298 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1212 21:15:03.389750   61298 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1212 21:15:03.391084   61298 api_server.go:141] control plane version: v1.28.4
	I1212 21:15:03.391117   61298 api_server.go:131] duration metric: took 4.005829911s to wait for apiserver health ...
	I1212 21:15:03.391155   61298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:03.391181   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:15:03.391262   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:15:03.438733   61298 cri.go:89] found id: "27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:03.438754   61298 cri.go:89] found id: ""
	I1212 21:15:03.438762   61298 logs.go:284] 1 containers: [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487]
	I1212 21:15:03.438809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.443133   61298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:15:03.443203   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:15:03.488960   61298 cri.go:89] found id: "45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.488990   61298 cri.go:89] found id: ""
	I1212 21:15:03.489001   61298 logs.go:284] 1 containers: [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d]
	I1212 21:15:03.489058   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.493741   61298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:15:03.493807   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:15:03.541286   61298 cri.go:89] found id: "d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:03.541316   61298 cri.go:89] found id: ""
	I1212 21:15:03.541325   61298 logs.go:284] 1 containers: [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478]
	I1212 21:15:03.541387   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.545934   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:15:03.546008   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:15:03.585937   61298 cri.go:89] found id: "cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.585962   61298 cri.go:89] found id: ""
	I1212 21:15:03.585971   61298 logs.go:284] 1 containers: [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0]
	I1212 21:15:03.586039   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.590444   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:15:03.590516   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:15:03.626793   61298 cri.go:89] found id: "5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:03.626826   61298 cri.go:89] found id: ""
	I1212 21:15:03.626835   61298 logs.go:284] 1 containers: [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399]
	I1212 21:15:03.626894   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.631829   61298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:15:03.631906   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:15:03.676728   61298 cri.go:89] found id: "b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:03.676750   61298 cri.go:89] found id: ""
	I1212 21:15:03.676758   61298 logs.go:284] 1 containers: [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa]
	I1212 21:15:03.676809   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.681068   61298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:15:03.681123   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:15:03.723403   61298 cri.go:89] found id: ""
	I1212 21:15:03.723430   61298 logs.go:284] 0 containers: []
	W1212 21:15:03.723437   61298 logs.go:286] No container was found matching "kindnet"
	I1212 21:15:03.723442   61298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:15:03.723502   61298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:15:03.772837   61298 cri.go:89] found id: "ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.772868   61298 cri.go:89] found id: "ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.772875   61298 cri.go:89] found id: ""
	I1212 21:15:03.772884   61298 logs.go:284] 2 containers: [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1]
	I1212 21:15:03.772940   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.777274   61298 ssh_runner.go:195] Run: which crictl
	I1212 21:15:03.782354   61298 logs.go:123] Gathering logs for storage-provisioner [ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102] ...
	I1212 21:15:03.782379   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea6928f21cd25b5ae174f7af1617b7c0798aaefb906fc15986847748180b5102"
	I1212 21:15:03.823892   61298 logs.go:123] Gathering logs for storage-provisioner [ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1] ...
	I1212 21:15:03.823919   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0e02bbed658b169411296bc35d35f193ef2638d99b76cd90fabc8679ef12f1"
	I1212 21:15:03.866656   61298 logs.go:123] Gathering logs for etcd [45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d] ...
	I1212 21:15:03.866689   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c49920e407252acc9cc1c6706ad08ef2588077ce184811a9d696912deaed9d"
	I1212 21:15:03.920757   61298 logs.go:123] Gathering logs for kube-scheduler [cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0] ...
	I1212 21:15:03.920798   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9a395f80d15400f47707fd72b626530d6990683450fd38d59bb36e6e082cc0"
	I1212 21:15:03.963737   61298 logs.go:123] Gathering logs for kube-proxy [5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399] ...
	I1212 21:15:03.963766   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c1bc3f3622dae1cf0f61b138800b352a91eb137d1c9cda1cf0034de39182399"
	I1212 21:15:04.005559   61298 logs.go:123] Gathering logs for kube-controller-manager [b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa] ...
	I1212 21:15:04.005582   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4c8c82cfc4cf53e5f5719639b38534a9a789bec6e5ade8ca72fe6d843df04aa"
	I1212 21:15:04.054868   61298 logs.go:123] Gathering logs for container status ...
	I1212 21:15:04.054901   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:15:04.118941   61298 logs.go:123] Gathering logs for kubelet ...
	I1212 21:15:04.118973   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:15:04.188272   61298 logs.go:123] Gathering logs for coredns [d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478] ...
	I1212 21:15:04.188314   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ecf165d7cb697d8e8c59a87595056e64d70ef293ca148747ad98832e8d4478"
	I1212 21:15:04.230473   61298 logs.go:123] Gathering logs for dmesg ...
	I1212 21:15:04.230502   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:15:04.247069   61298 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:15:04.247097   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 21:15:04.425844   61298 logs.go:123] Gathering logs for kube-apiserver [27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487] ...
	I1212 21:15:04.425877   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487"
	I1212 21:15:04.492751   61298 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:15:04.492789   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
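The block above shows minikube collecting node diagnostics by shelling out to crictl, journalctl and dmesg over SSH. For reference, the same data can be pulled by hand from the node; a minimal sketch using `minikube ssh` and the exact commands and container ID seen in this run (assuming the default-k8s-diff-port-171828 profile this part of the log belongs to):

    # open a shell on the node for this profile
    $ minikube ssh -p default-k8s-diff-port-171828
    # tail one container's logs by ID (ID taken from the cri.go lines above)
    $ sudo /usr/bin/crictl logs --tail 400 27b89c10d83beb219de90fa9dac59e4fa5f0df22626974152e7afe999062a487
    # kubelet and CRI-O service logs, plus recent kernel warnings
    $ sudo journalctl -u kubelet -n 400
    $ sudo journalctl -u crio -n 400
    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
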
	I1212 21:15:07.374768   61298 system_pods.go:59] 8 kube-system pods found
	I1212 21:15:07.374796   61298 system_pods.go:61] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.374801   61298 system_pods.go:61] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.374806   61298 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.374810   61298 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.374814   61298 system_pods.go:61] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.374818   61298 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.374823   61298 system_pods.go:61] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.374828   61298 system_pods.go:61] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.374835   61298 system_pods.go:74] duration metric: took 3.983674471s to wait for pod list to return data ...
	I1212 21:15:07.374842   61298 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:07.377370   61298 default_sa.go:45] found service account: "default"
	I1212 21:15:07.377391   61298 default_sa.go:55] duration metric: took 2.542814ms for default service account to be created ...
	I1212 21:15:07.377400   61298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:07.384723   61298 system_pods.go:86] 8 kube-system pods found
	I1212 21:15:07.384751   61298 system_pods.go:89] "coredns-5dd5756b68-b5jrg" [1089e305-a4ce-43d3-83cb-f754858297b3] Running
	I1212 21:15:07.384758   61298 system_pods.go:89] "etcd-default-k8s-diff-port-171828" [e15b3043-e9d5-4cfb-ad17-6ffa3884223b] Running
	I1212 21:15:07.384767   61298 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-171828" [112bd66e-b790-4d36-9fd5-43b4f1ae898d] Running
	I1212 21:15:07.384776   61298 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-171828" [5ba89dec-244a-4a3f-9e0f-4b52d6d1ab45] Running
	I1212 21:15:07.384782   61298 system_pods.go:89] "kube-proxy-47qmb" [93908813-508a-4c97-a20d-5d59a3e6befb] Running
	I1212 21:15:07.384789   61298 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-171828" [ce8f3bb3-7963-4495-835a-463a3899cfc1] Running
	I1212 21:15:07.384800   61298 system_pods.go:89] "metrics-server-57f55c9bc5-fqrqh" [633d3468-a8df-4c9b-9bab-8c26ce998832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:07.384809   61298 system_pods.go:89] "storage-provisioner" [c3a7c100-e7b7-4179-b821-d191741a66fb] Running
	I1212 21:15:07.384824   61298 system_pods.go:126] duration metric: took 7.416446ms to wait for k8s-apps to be running ...
	I1212 21:15:07.384838   61298 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:15:07.384893   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:07.402316   61298 system_svc.go:56] duration metric: took 17.466449ms WaitForService to wait for kubelet.
	I1212 21:15:07.402350   61298 kubeadm.go:581] duration metric: took 4m24.176848962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:15:07.402367   61298 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:15:07.405566   61298 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:15:07.405598   61298 node_conditions.go:123] node cpu capacity is 2
	I1212 21:15:07.405616   61298 node_conditions.go:105] duration metric: took 3.244651ms to run NodePressure ...
	I1212 21:15:07.405628   61298 start.go:228] waiting for startup goroutines ...
	I1212 21:15:07.405637   61298 start.go:233] waiting for cluster config update ...
	I1212 21:15:07.405649   61298 start.go:242] writing updated cluster config ...
	I1212 21:15:07.405956   61298 ssh_runner.go:195] Run: rm -f paused
	I1212 21:15:07.457339   61298 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 21:15:07.459492   61298 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-171828" cluster and "default" namespace by default
	I1212 21:15:04.820409   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:07.323802   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:08.135943   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:10.633863   60948 pod_ready.go:102] pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:11.829177   60948 pod_ready.go:81] duration metric: took 4m0.000566874s waiting for pod "metrics-server-74d5856cc6-7gcw4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:11.829231   60948 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:11.829268   60948 pod_ready.go:38] duration metric: took 4m1.1991406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:11.829314   60948 kubeadm.go:640] restartCluster took 5m11.909727716s
	W1212 21:15:11.829387   60948 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:11.829425   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:09.824487   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:12.319761   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:14.818898   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:16.822843   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:18.398899   60948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.569443116s)
	I1212 21:15:18.398988   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:18.421423   60948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:18.437661   60948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:18.459692   60948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:18.459747   60948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 21:15:18.529408   60948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1212 21:15:18.529485   60948 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:18.690865   60948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:18.691034   60948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:18.691165   60948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:18.939806   60948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:18.939966   60948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:18.949943   60948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1212 21:15:19.070931   60948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:19.072676   60948 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:19.072783   60948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:19.072868   60948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:19.072976   60948 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:19.073053   60948 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:19.073145   60948 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:19.073253   60948 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:19.073367   60948 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:19.073461   60948 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:19.073562   60948 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:19.073669   60948 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:19.073732   60948 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:19.073833   60948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:19.136565   60948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:19.614416   60948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:19.754535   60948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:20.149412   60948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:20.150707   60948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:20.152444   60948 out.go:204]   - Booting up control plane ...
	I1212 21:15:20.152579   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:20.158445   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:20.162012   60948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:20.162125   60948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:20.163852   60948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:19.321950   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:21.334725   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:23.820711   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:26.320918   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.174689   60948 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.007313 seconds
	I1212 21:15:29.174814   60948 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:29.189641   60948 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:29.715080   60948 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:29.715312   60948 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-372099 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 21:15:30.225103   60948 kubeadm.go:322] [bootstrap-token] Using token: h843b5.c34afz2u52stqeoc
	I1212 21:15:30.226707   60948 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:30.226873   60948 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:30.237412   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:30.245755   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:30.252764   60948 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:30.259184   60948 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:30.405726   60948 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:30.647756   60948 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:30.647812   60948 kubeadm.go:322] 
	I1212 21:15:30.647908   60948 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:30.647920   60948 kubeadm.go:322] 
	I1212 21:15:30.648030   60948 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:30.648040   60948 kubeadm.go:322] 
	I1212 21:15:30.648076   60948 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:30.648155   60948 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:30.648219   60948 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:30.648229   60948 kubeadm.go:322] 
	I1212 21:15:30.648358   60948 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:30.648477   60948 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:30.648571   60948 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:30.648582   60948 kubeadm.go:322] 
	I1212 21:15:30.648698   60948 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1212 21:15:30.648813   60948 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:30.648824   60948 kubeadm.go:322] 
	I1212 21:15:30.648920   60948 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649052   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:30.649101   60948 kubeadm.go:322]     --control-plane 	  
	I1212 21:15:30.649111   60948 kubeadm.go:322] 
	I1212 21:15:30.649205   60948 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:30.649214   60948 kubeadm.go:322] 
	I1212 21:15:30.649313   60948 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h843b5.c34afz2u52stqeoc \
	I1212 21:15:30.649435   60948 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:30.649933   60948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:15:30.649961   60948 cni.go:84] Creating CNI manager for ""
	I1212 21:15:30.649971   60948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:30.651531   60948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:30.652689   60948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:30.663574   60948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
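The step above writes the bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the log records only its size (457 bytes), not its contents. Purely for illustration (this is not the exact file minikube generated here), a bridge + portmap conflist of this kind looks roughly like:

    # hypothetical contents -- the real file written by minikube is not shown in this log
    $ sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
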
	I1212 21:15:30.686618   60948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:30.686690   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.686692   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=old-k8s-version-372099 minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:30.707974   60948 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:30.909886   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.037212   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:31.641453   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:28.819896   60628 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:29.562965   60628 pod_ready.go:81] duration metric: took 4m0.000097626s waiting for pod "metrics-server-57f55c9bc5-tmmk4" in "kube-system" namespace to be "Ready" ...
	E1212 21:15:29.563010   60628 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 21:15:29.563041   60628 pod_ready.go:38] duration metric: took 4m10.604144973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:29.563066   60628 kubeadm.go:640] restartCluster took 4m31.813522594s
	W1212 21:15:29.563127   60628 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 21:15:29.563156   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:15:32.141066   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:32.640787   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.140569   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:33.640785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.140535   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:34.641063   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.140492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:35.640819   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.140748   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:36.640647   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.141492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:37.641109   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.140524   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:38.641401   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.141549   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:39.641304   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.141537   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:40.641149   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.141391   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:41.640949   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.000355   60628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.437170953s)
	I1212 21:15:44.000430   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:44.014718   60628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:15:44.025263   60628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:15:44.035086   60628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:15:44.035133   60628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 21:15:44.089390   60628 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 21:15:44.089499   60628 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 21:15:44.275319   60628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:15:44.275496   60628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:15:44.275594   60628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 21:15:44.529521   60628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:15:42.141256   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:42.640563   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.140785   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:43.640773   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.141155   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:44.641415   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.140534   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:45.641492   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.141203   60948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:46.259301   60948 kubeadm.go:1088] duration metric: took 15.572687129s to wait for elevateKubeSystemPrivileges.
	I1212 21:15:46.259339   60948 kubeadm.go:406] StartCluster complete in 5m46.398198596s
	I1212 21:15:46.259364   60948 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.259455   60948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:15:46.261128   60948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:15:46.261410   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:15:46.261582   60948 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:15:46.261654   60948 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261676   60948 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-372099"
	W1212 21:15:46.261691   60948 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:15:46.261690   60948 config.go:182] Loaded profile config "old-k8s-version-372099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 21:15:46.261729   60948 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-372099"
	I1212 21:15:46.261739   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.261745   60948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-372099"
	I1212 21:15:46.262128   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262150   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262176   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262204   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.262371   60948 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-372099"
	I1212 21:15:46.262388   60948 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-372099"
	W1212 21:15:46.262396   60948 addons.go:240] addon metrics-server should already be in state true
	I1212 21:15:46.262431   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.262755   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.262775   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.280829   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 21:15:46.281025   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1212 21:15:46.281167   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I1212 21:15:46.281451   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.281529   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.282027   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282043   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282307   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282340   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282381   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282455   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.282466   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.282563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.282760   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.282816   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.283348   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283365   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.283377   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.283388   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.286570   60948 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-372099"
	W1212 21:15:46.286591   60948 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:15:46.286618   60948 host.go:66] Checking if "old-k8s-version-372099" exists ...
	I1212 21:15:46.287021   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.287041   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.300740   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1212 21:15:46.301674   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.301993   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I1212 21:15:46.302303   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.302317   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.302667   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.302772   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.302940   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.303112   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.303127   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.303537   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.304537   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.306285   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.308411   60948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:15:46.307398   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1212 21:15:46.307432   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.310694   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:15:46.310717   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:15:46.310737   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.311358   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.312839   60948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:15:44.530987   60628 out.go:204]   - Generating certificates and keys ...
	I1212 21:15:44.531136   60628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 21:15:44.531267   60628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 21:15:44.531359   60628 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:15:44.531879   60628 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:15:44.532386   60628 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:15:44.533944   60628 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 21:15:44.535037   60628 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:15:44.536175   60628 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:15:44.537226   60628 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:15:44.537964   60628 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:15:44.538451   60628 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 21:15:44.538551   60628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:15:44.841462   60628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:15:45.059424   60628 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:15:45.613097   60628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:15:46.221274   60628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:15:46.372266   60628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:15:46.373199   60628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:15:46.376094   60628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:15:46.311872   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.314010   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.314158   60948 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.314170   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:15:46.314187   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.314387   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.314450   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.314958   60948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:15:46.314985   60948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:15:46.315221   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.315264   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.315563   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.315745   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.315925   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.316191   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.322472   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324106   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.324142   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.324390   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.324651   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.324861   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.325008   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	I1212 21:15:46.339982   60948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I1212 21:15:46.340365   60948 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:15:46.340889   60948 main.go:141] libmachine: Using API Version  1
	I1212 21:15:46.340915   60948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:15:46.341242   60948 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:15:46.341434   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetState
	I1212 21:15:46.343069   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .DriverName
	I1212 21:15:46.343366   60948 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.343384   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:15:46.343402   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHHostname
	I1212 21:15:46.346212   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346596   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:fa:ae", ip: ""} in network mk-old-k8s-version-372099: {Iface:virbr4 ExpiryTime:2023-12-12 22:09:39 +0000 UTC Type:0 Mac:52:54:00:d3:fa:ae Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:old-k8s-version-372099 Clientid:01:52:54:00:d3:fa:ae}
	I1212 21:15:46.346626   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | domain old-k8s-version-372099 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:fa:ae in network mk-old-k8s-version-372099
	I1212 21:15:46.346882   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHPort
	I1212 21:15:46.347322   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHKeyPath
	I1212 21:15:46.347482   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .GetSSHUsername
	I1212 21:15:46.347618   60948 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/old-k8s-version-372099/id_rsa Username:docker}
	W1212 21:15:46.380698   60948 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-372099" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:15:46.380724   60948 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1212 21:15:46.380745   60948 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:15:46.383175   60948 out.go:177] * Verifying Kubernetes components...
	I1212 21:15:46.384789   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:15:46.518292   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:15:46.518316   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:15:46.519393   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:15:46.554663   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:15:46.580810   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:15:46.580839   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:15:46.614409   60948 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.614501   60948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:15:46.628267   60948 node_ready.go:49] node "old-k8s-version-372099" has status "Ready":"True"
	I1212 21:15:46.628302   60948 node_ready.go:38] duration metric: took 13.858882ms waiting for node "old-k8s-version-372099" to be "Ready" ...
	I1212 21:15:46.628318   60948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:46.651927   60948 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:46.651957   60948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:15:46.655191   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:46.734455   60948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:15:47.462832   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462859   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.462837   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.462930   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465016   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465028   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465047   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465057   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465066   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465018   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465027   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465126   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465143   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.465155   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.465440   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465459   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.465460   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465477   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.465462   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.465509   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.509931   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.509955   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.510242   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.510268   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.510289   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.529296   60948 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 21:15:47.740624   60948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006125978s)
	I1212 21:15:47.740686   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.740704   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741036   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.741066   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741082   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741104   60948 main.go:141] libmachine: Making call to close driver server
	I1212 21:15:47.741117   60948 main.go:141] libmachine: (old-k8s-version-372099) Calling .Close
	I1212 21:15:47.741344   60948 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:15:47.741370   60948 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:15:47.741380   60948 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-372099"
	I1212 21:15:47.741382   60948 main.go:141] libmachine: (old-k8s-version-372099) DBG | Closing plugin on server side
	I1212 21:15:47.743094   60948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:15:46.377620   60628 out.go:204]   - Booting up control plane ...
	I1212 21:15:46.377753   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:15:46.380316   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:15:46.381669   60628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:15:46.400406   60628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:15:46.401911   60628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:15:46.402016   60628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 21:15:46.577916   60628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 21:15:47.744911   60948 addons.go:502] enable addons completed in 1.483323446s: enabled=[storage-provisioner default-storageclass metrics-server]
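The addon phase logged above can also be checked from the host with minikube's addons subcommand. A minimal sketch against the profile name used in this run (illustrative only; this output is not part of the captured log):

# list addon states for the old-k8s-version profile
out/minikube-linux-amd64 addons list -p old-k8s-version-372099
# storage-provisioner, default-storageclass and metrics-server should report as enabled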
	I1212 21:15:48.879924   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:51.240011   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.081961   60628 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503798 seconds
	I1212 21:15:55.108753   60628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 21:15:55.132442   60628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 21:15:55.675426   60628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 21:15:55.675616   60628 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-343495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 21:15:56.197198   60628 kubeadm.go:322] [bootstrap-token] Using token: 6e6rca.dj99vsq9tzjoif3m
	I1212 21:15:56.198596   60628 out.go:204]   - Configuring RBAC rules ...
	I1212 21:15:56.198756   60628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 21:15:56.204758   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 21:15:56.217506   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 21:15:56.221482   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 21:15:56.225791   60628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 21:15:56.231024   60628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 21:15:56.249696   60628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 21:15:56.516070   60628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 21:15:56.613203   60628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 21:15:56.613227   60628 kubeadm.go:322] 
	I1212 21:15:56.613315   60628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 21:15:56.613340   60628 kubeadm.go:322] 
	I1212 21:15:56.613432   60628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 21:15:56.613447   60628 kubeadm.go:322] 
	I1212 21:15:56.613501   60628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 21:15:56.613588   60628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 21:15:56.613671   60628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 21:15:56.613682   60628 kubeadm.go:322] 
	I1212 21:15:56.613755   60628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 21:15:56.613762   60628 kubeadm.go:322] 
	I1212 21:15:56.613822   60628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 21:15:56.613832   60628 kubeadm.go:322] 
	I1212 21:15:56.613903   60628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 21:15:56.614004   60628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 21:15:56.614104   60628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 21:15:56.614116   60628 kubeadm.go:322] 
	I1212 21:15:56.614244   60628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 21:15:56.614369   60628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 21:15:56.614388   60628 kubeadm.go:322] 
	I1212 21:15:56.614507   60628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614653   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 \
	I1212 21:15:56.614682   60628 kubeadm.go:322] 	--control-plane 
	I1212 21:15:56.614689   60628 kubeadm.go:322] 
	I1212 21:15:56.614787   60628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 21:15:56.614797   60628 kubeadm.go:322] 
	I1212 21:15:56.614865   60628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6e6rca.dj99vsq9tzjoif3m \
	I1212 21:15:56.614993   60628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e516685f3dc0064a5c5c0e2ae77dd4a2c0b19f763f2d288eec8f9124f8c3d5b5 
	I1212 21:15:56.616155   60628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
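The join commands in the init output above embed a bootstrap token and a CA certificate hash. If that output is lost, both can be regenerated on the control-plane node; a sketch assuming the standard kubeadm PKI path /etc/kubernetes/pki/ca.crt (illustrative, not taken from this run):

# recompute the --discovery-token-ca-cert-hash value from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
# or have kubeadm print a fresh, complete join command
kubeadm token create --print-join-command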
	I1212 21:15:56.616184   60628 cni.go:84] Creating CNI manager for ""
	I1212 21:15:56.616197   60628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 21:15:56.618787   60628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 21:15:53.240376   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:55.738865   60948 pod_ready.go:102] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"False"
	I1212 21:15:56.620193   60628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 21:15:56.653642   60628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
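The 457-byte conflist copied above is not reproduced in the log; a bridge CNI configuration of roughly this shape is what such a file typically contains. This is an illustrative sketch only, with an assumed pod subnet, not the exact bytes minikube writes:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF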
	I1212 21:15:56.701431   60628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 21:15:56.701520   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.701521   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=no-preload-343495 minikube.k8s.io/updated_at=2023_12_12T21_15_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:56.765645   60628 ops.go:34] apiserver oom_adj: -16
	I1212 21:15:57.021925   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.162627   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.772366   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:57.239852   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.239881   60948 pod_ready.go:81] duration metric: took 10.584655594s waiting for pod "coredns-5644d7b6d9-bd52f" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.239895   60948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245919   60948 pod_ready.go:92] pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.245943   60948 pod_ready.go:81] duration metric: took 6.039649ms waiting for pod "coredns-5644d7b6d9-cn5ch" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.245955   60948 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251905   60948 pod_ready.go:92] pod "kube-proxy-vzqkz" in "kube-system" namespace has status "Ready":"True"
	I1212 21:15:57.251933   60948 pod_ready.go:81] duration metric: took 5.969732ms waiting for pod "kube-proxy-vzqkz" in "kube-system" namespace to be "Ready" ...
	I1212 21:15:57.251943   60948 pod_ready.go:38] duration metric: took 10.623613273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:15:57.251963   60948 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:15:57.252021   60948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:15:57.271808   60948 api_server.go:72] duration metric: took 10.891018678s to wait for apiserver process to appear ...
	I1212 21:15:57.271834   60948 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:15:57.271853   60948 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 21:15:57.279544   60948 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 21:15:57.280373   60948 api_server.go:141] control plane version: v1.16.0
	I1212 21:15:57.280393   60948 api_server.go:131] duration metric: took 8.55283ms to wait for apiserver health ...
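The healthz poll above authenticates with the cluster's client credentials; the same endpoint can be queried from a workstation through kubectl's raw API access. A sketch, assuming the kubeconfig context carries the profile name as minikube normally sets it up (illustrative only):

# query the aggregate health endpoint the test polls
kubectl --context old-k8s-version-372099 get --raw='/healthz'
# on newer control planes, /readyz?verbose breaks the result down per check
# (not available on this v1.16.0 apiserver)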
	I1212 21:15:57.280401   60948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:15:57.284489   60948 system_pods.go:59] 5 kube-system pods found
	I1212 21:15:57.284516   60948 system_pods.go:61] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.284520   60948 system_pods.go:61] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.284525   60948 system_pods.go:61] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.284531   60948 system_pods.go:61] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.284535   60948 system_pods.go:61] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.284542   60948 system_pods.go:74] duration metric: took 4.136571ms to wait for pod list to return data ...
	I1212 21:15:57.284549   60948 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:15:57.288616   60948 default_sa.go:45] found service account: "default"
	I1212 21:15:57.288643   60948 default_sa.go:55] duration metric: took 4.087698ms for default service account to be created ...
	I1212 21:15:57.288653   60948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:15:57.292785   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.292807   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.292812   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.292816   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.292822   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.292827   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.292842   60948 retry.go:31] will retry after 207.544988ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.505885   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.505911   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.505917   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.505921   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.505928   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.505932   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.505949   60948 retry.go:31] will retry after 367.076908ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:57.878466   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:57.878501   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:57.878509   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:57.878514   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:57.878520   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:57.878527   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:57.878547   60948 retry.go:31] will retry after 381.308829ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.264211   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.264237   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.264243   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.264247   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.264256   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.264262   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.264290   60948 retry.go:31] will retry after 366.461937ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.638206   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:58.638229   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:58.638234   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:58.638238   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:58.638245   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:58.638249   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:58.638276   60948 retry.go:31] will retry after 512.413163ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.156233   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.156263   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.156268   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.156272   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.156279   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.156284   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.156301   60948 retry.go:31] will retry after 775.973999ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:59.937928   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:15:59.937958   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:15:59.937966   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:15:59.937973   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:15:59.937983   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:15:59.937990   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:15:59.938009   60948 retry.go:31] will retry after 831.74396ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:00.775403   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:00.775427   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:00.775432   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:00.775436   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:00.775442   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:00.775447   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:00.775461   60948 retry.go:31] will retry after 1.069326929s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:01.849879   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:01.849906   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:01.849911   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:01.849915   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:01.849922   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:01.849927   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:01.849944   60948 retry.go:31] will retry after 1.540430535s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:15:58.271568   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:58.772443   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.271781   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:15:59.771732   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.272235   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:00.771891   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.271870   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:01.772445   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.271997   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:02.772496   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.395395   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:03.395421   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:03.395427   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:03.395431   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:03.395437   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:03.395442   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:03.395458   60948 retry.go:31] will retry after 2.25868002s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:05.661953   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:05.661988   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:05.661997   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:05.662005   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:05.662016   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:05.662026   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:05.662047   60948 retry.go:31] will retry after 2.893719866s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:03.272067   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:03.771992   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.272187   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:04.772518   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.272480   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:05.772460   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.272463   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:06.772291   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.271662   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:07.772063   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.272491   60628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 21:16:08.414409   60628 kubeadm.go:1088] duration metric: took 11.712956328s to wait for elevateKubeSystemPrivileges.
	I1212 21:16:08.414452   60628 kubeadm.go:406] StartCluster complete in 5m10.714058162s
	I1212 21:16:08.414480   60628 settings.go:142] acquiring lock: {Name:mk49eeedac8900ca1b2ef328689641c0e324e806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.414582   60628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 21:16:08.417772   60628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17734-9188/kubeconfig: {Name:mka9dccdaf910363af1b402baad3291332866a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:16:08.418132   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 21:16:08.418167   60628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 21:16:08.418267   60628 addons.go:69] Setting storage-provisioner=true in profile "no-preload-343495"
	I1212 21:16:08.418281   60628 addons.go:69] Setting default-storageclass=true in profile "no-preload-343495"
	I1212 21:16:08.418289   60628 addons.go:231] Setting addon storage-provisioner=true in "no-preload-343495"
	W1212 21:16:08.418297   60628 addons.go:240] addon storage-provisioner should already be in state true
	I1212 21:16:08.418301   60628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-343495"
	I1212 21:16:08.418310   60628 addons.go:69] Setting metrics-server=true in profile "no-preload-343495"
	I1212 21:16:08.418344   60628 addons.go:231] Setting addon metrics-server=true in "no-preload-343495"
	I1212 21:16:08.418349   60628 host.go:66] Checking if "no-preload-343495" exists ...
	W1212 21:16:08.418353   60628 addons.go:240] addon metrics-server should already be in state true
	I1212 21:16:08.418367   60628 config.go:182] Loaded profile config "no-preload-343495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 21:16:08.418401   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418776   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418810   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.418738   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.418850   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.437816   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I1212 21:16:08.438320   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.438921   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.438945   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.439225   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I1212 21:16:08.439418   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.439740   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.439809   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1212 21:16:08.440064   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.440092   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.440471   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.440491   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.440499   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.440887   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.440978   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.441002   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.441399   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.441442   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.441724   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.441960   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.446221   60628 addons.go:231] Setting addon default-storageclass=true in "no-preload-343495"
	W1212 21:16:08.446247   60628 addons.go:240] addon default-storageclass should already be in state true
	I1212 21:16:08.446276   60628 host.go:66] Checking if "no-preload-343495" exists ...
	I1212 21:16:08.446655   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.446690   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.456479   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1212 21:16:08.456883   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.457330   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.457343   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.457784   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.457958   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.459741   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.461624   60628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 21:16:08.462951   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 21:16:08.462963   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 21:16:08.462978   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.462595   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1212 21:16:08.463831   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.464424   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.464443   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.465295   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.465627   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.467919   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468652   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.468681   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.468905   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.469083   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.469197   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.469296   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.472614   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.474536   60628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 21:16:08.475957   60628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.475976   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 21:16:08.475995   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.476821   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I1212 21:16:08.477241   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.477772   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.477796   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.478322   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.479408   60628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 21:16:08.479457   60628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 21:16:08.479725   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480262   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.480285   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.480565   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.480760   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.480909   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.481087   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.496182   60628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I1212 21:16:08.496703   60628 main.go:141] libmachine: () Calling .GetVersion
	I1212 21:16:08.497250   60628 main.go:141] libmachine: Using API Version  1
	I1212 21:16:08.497275   60628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 21:16:08.497705   60628 main.go:141] libmachine: () Calling .GetMachineName
	I1212 21:16:08.497959   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetState
	I1212 21:16:08.499696   60628 main.go:141] libmachine: (no-preload-343495) Calling .DriverName
	I1212 21:16:08.500049   60628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.500071   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 21:16:08.500098   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHHostname
	I1212 21:16:08.503216   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503689   60628 main.go:141] libmachine: (no-preload-343495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:91:03", ip: ""} in network mk-no-preload-343495: {Iface:virbr3 ExpiryTime:2023-12-12 22:10:27 +0000 UTC Type:0 Mac:52:54:00:60:91:03 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-343495 Clientid:01:52:54:00:60:91:03}
	I1212 21:16:08.503717   60628 main.go:141] libmachine: (no-preload-343495) DBG | domain no-preload-343495 has defined IP address 192.168.61.176 and MAC address 52:54:00:60:91:03 in network mk-no-preload-343495
	I1212 21:16:08.503979   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHPort
	I1212 21:16:08.504187   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHKeyPath
	I1212 21:16:08.504348   60628 main.go:141] libmachine: (no-preload-343495) Calling .GetSSHUsername
	I1212 21:16:08.504521   60628 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/no-preload-343495/id_rsa Username:docker}
	I1212 21:16:08.519292   60628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-343495" context rescaled to 1 replicas
	I1212 21:16:08.519324   60628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:16:08.521243   60628 out.go:177] * Verifying Kubernetes components...
	I1212 21:16:08.522602   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:08.637693   60628 node_ready.go:35] waiting up to 6m0s for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.638072   60628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 21:16:08.640594   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 21:16:08.640620   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 21:16:08.645008   60628 node_ready.go:49] node "no-preload-343495" has status "Ready":"True"
	I1212 21:16:08.645041   60628 node_ready.go:38] duration metric: took 7.313798ms waiting for node "no-preload-343495" to be "Ready" ...
	I1212 21:16:08.645056   60628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.650650   60628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658528   60628 pod_ready.go:92] pod "etcd-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.658556   60628 pod_ready.go:81] duration metric: took 7.881265ms waiting for pod "etcd-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.658569   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682938   60628 pod_ready.go:92] pod "kube-apiserver-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.682962   60628 pod_ready.go:81] duration metric: took 24.384424ms waiting for pod "kube-apiserver-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.682975   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.683220   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 21:16:08.688105   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 21:16:08.688131   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 21:16:08.695007   60628 pod_ready.go:92] pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.695034   60628 pod_ready.go:81] duration metric: took 12.050101ms waiting for pod "kube-controller-manager-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.695046   60628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701206   60628 pod_ready.go:92] pod "kube-scheduler-no-preload-343495" in "kube-system" namespace has status "Ready":"True"
	I1212 21:16:08.701230   60628 pod_ready.go:81] duration metric: took 6.174333ms waiting for pod "kube-scheduler-no-preload-343495" in "kube-system" namespace to be "Ready" ...
	I1212 21:16:08.701240   60628 pod_ready.go:38] duration metric: took 56.165354ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 21:16:08.701262   60628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:16:08.701321   60628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:16:08.744650   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 21:16:08.758415   60628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:08.758444   60628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 21:16:08.841030   60628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 21:16:09.387385   60628 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1212 21:16:10.224475   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541186317s)
	I1212 21:16:10.224515   60628 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.523170366s)
	I1212 21:16:10.224548   60628 api_server.go:72] duration metric: took 1.705201863s to wait for apiserver process to appear ...
	I1212 21:16:10.224561   60628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:16:10.224571   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.479890747s)
	I1212 21:16:10.224606   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224579   60628 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1212 21:16:10.224621   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.224522   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.224686   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225001   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225050   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225065   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225074   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225011   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225019   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225020   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225115   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225130   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.225140   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.225347   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225358   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.225507   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.225572   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.225600   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.233359   60628 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1212 21:16:10.237567   60628 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 21:16:10.237593   60628 api_server.go:131] duration metric: took 13.024501ms to wait for apiserver health ...
	I1212 21:16:10.237602   60628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:16:10.268851   60628 system_pods.go:59] 9 kube-system pods found
	I1212 21:16:10.268891   60628 system_pods.go:61] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268903   60628 system_pods.go:61] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.268912   60628 system_pods.go:61] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.268920   60628 system_pods.go:61] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.268927   60628 system_pods.go:61] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.268936   60628 system_pods.go:61] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.268943   60628 system_pods.go:61] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.268953   60628 system_pods.go:61] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.268963   60628 system_pods.go:61] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending
	I1212 21:16:10.268971   60628 system_pods.go:74] duration metric: took 31.361836ms to wait for pod list to return data ...
	I1212 21:16:10.268987   60628 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:16:10.270947   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.270971   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.271270   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.271290   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.280134   60628 default_sa.go:45] found service account: "default"
	I1212 21:16:10.280159   60628 default_sa.go:55] duration metric: took 11.163534ms for default service account to be created ...
	I1212 21:16:10.280169   60628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:16:10.314822   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.314864   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314873   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.314879   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.314886   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.314893   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.314903   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.314912   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.314923   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.314937   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.314957   60628 retry.go:31] will retry after 284.074155ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.328798   60628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.487713481s)
	I1212 21:16:10.328851   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.328866   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329251   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329276   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329276   60628 main.go:141] libmachine: (no-preload-343495) DBG | Closing plugin on server side
	I1212 21:16:10.329291   60628 main.go:141] libmachine: Making call to close driver server
	I1212 21:16:10.329304   60628 main.go:141] libmachine: (no-preload-343495) Calling .Close
	I1212 21:16:10.329540   60628 main.go:141] libmachine: Successfully made call to close driver server
	I1212 21:16:10.329556   60628 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 21:16:10.329566   60628 addons.go:467] Verifying addon metrics-server=true in "no-preload-343495"
	I1212 21:16:10.332474   60628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 21:16:08.563361   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:08.563393   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:08.563401   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:08.563408   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:08.563420   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:08.563427   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:08.563449   60948 retry.go:31] will retry after 2.871673075s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:11.441932   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:11.441970   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:11.441977   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:11.441983   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:11.441993   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.442003   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:11.442022   60948 retry.go:31] will retry after 3.977150615s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:10.333924   60628 addons.go:502] enable addons completed in 1.915760025s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 21:16:10.616684   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.616724   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616739   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.616748   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.616757   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.616764   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.616775   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.616785   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.616795   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.616807   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.616825   60628 retry.go:31] will retry after 291.662068ms: missing components: kube-dns, kube-proxy
	I1212 21:16:10.919064   60628 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:10.919104   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919114   60628 system_pods.go:89] "coredns-76f75df574-dp6zq" [6bea2e07-9081-4b87-94c3-775f6490f6ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:10.919125   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:10.919135   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:10.919142   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:10.919152   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:10.919160   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:10.919211   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:10.919229   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:10.919259   60628 retry.go:31] will retry after 381.992278ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.312083   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.312115   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.312121   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.312128   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.312137   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.312146   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.312152   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.312162   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.312170   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.312189   60628 retry.go:31] will retry after 495.705235ms: missing components: kube-dns, kube-proxy
	I1212 21:16:11.820167   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:11.820200   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:16:11.820205   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:11.820212   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:11.820217   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:11.820222   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 21:16:11.820226   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:11.820232   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:11.820237   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:16:11.820254   60628 retry.go:31] will retry after 635.810888ms: missing components: kube-dns, kube-proxy
	I1212 21:16:12.464096   60628 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:12.464139   60628 system_pods.go:89] "coredns-76f75df574-466sr" [90a22351-0561-4345-8997-ce6b7ab438f7] Running
	I1212 21:16:12.464145   60628 system_pods.go:89] "etcd-no-preload-343495" [5e054af6-67c5-4d63-8b6b-076217c32723] Running
	I1212 21:16:12.464149   60628 system_pods.go:89] "kube-apiserver-no-preload-343495" [f99a12a0-bc44-4bb8-9340-f34139409ed8] Running
	I1212 21:16:12.464154   60628 system_pods.go:89] "kube-controller-manager-no-preload-343495" [04a3c991-e527-4e54-98ac-462953c48bc0] Running
	I1212 21:16:12.464158   60628 system_pods.go:89] "kube-proxy-glrvd" [57b708fd-e950-4fe9-adbc-dece2985edd1] Running
	I1212 21:16:12.464162   60628 system_pods.go:89] "kube-scheduler-no-preload-343495" [774eced6-5ede-41fd-9fd0-5338e83e4c93] Running
	I1212 21:16:12.464168   60628 system_pods.go:89] "metrics-server-57f55c9bc5-xc79n" [fda5e773-f1a9-4f99-a0e0-06d67d5f1705] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:12.464176   60628 system_pods.go:89] "storage-provisioner" [2ba6a30c-79ab-43e4-92fe-7c11a6046571] Running
	I1212 21:16:12.464185   60628 system_pods.go:126] duration metric: took 2.184010512s to wait for k8s-apps to be running ...
	I1212 21:16:12.464192   60628 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:12.464272   60628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:12.480090   60628 system_svc.go:56] duration metric: took 15.887114ms WaitForService to wait for kubelet.
	I1212 21:16:12.480124   60628 kubeadm.go:581] duration metric: took 3.960778694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:12.480163   60628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:12.483564   60628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:12.483589   60628 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:12.483601   60628 node_conditions.go:105] duration metric: took 3.433071ms to run NodePressure ...
	I1212 21:16:12.483612   60628 start.go:228] waiting for startup goroutines ...
	I1212 21:16:12.483617   60628 start.go:233] waiting for cluster config update ...
	I1212 21:16:12.483626   60628 start.go:242] writing updated cluster config ...
	I1212 21:16:12.483887   60628 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:12.534680   60628 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 21:16:12.536622   60628 out.go:177] * Done! kubectl is now configured to use "no-preload-343495" cluster and "default" namespace by default
	I1212 21:16:15.424662   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:15.424691   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:15.424697   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:15.424701   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:15.424707   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:15.424712   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:15.424728   60948 retry.go:31] will retry after 4.920488737s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:20.351078   60948 system_pods.go:86] 5 kube-system pods found
	I1212 21:16:20.351107   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:20.351112   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:20.351116   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:20.351122   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:20.351127   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:20.351143   60948 retry.go:31] will retry after 5.718245097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:26.077073   60948 system_pods.go:86] 6 kube-system pods found
	I1212 21:16:26.077097   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:26.077103   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:26.077107   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Pending
	I1212 21:16:26.077111   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:26.077117   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:26.077122   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:26.077139   60948 retry.go:31] will retry after 8.251519223s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1212 21:16:34.334757   60948 system_pods.go:86] 8 kube-system pods found
	I1212 21:16:34.334782   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:34.334787   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:34.334791   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:34.334796   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Pending
	I1212 21:16:34.334799   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:34.334804   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:34.334811   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:34.334815   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:34.334830   60948 retry.go:31] will retry after 8.584990669s: missing components: kube-apiserver, kube-scheduler
	I1212 21:16:42.927591   60948 system_pods.go:86] 9 kube-system pods found
	I1212 21:16:42.927618   60948 system_pods.go:89] "coredns-5644d7b6d9-bd52f" [0ffc3a15-39e3-43be-a904-12e36683f6ea] Running
	I1212 21:16:42.927624   60948 system_pods.go:89] "coredns-5644d7b6d9-cn5ch" [1526d85b-394f-4ba3-b35c-f8d134080ea7] Running
	I1212 21:16:42.927628   60948 system_pods.go:89] "etcd-old-k8s-version-372099" [a9f11c2e-23b6-453d-9bc1-b5f90b887c26] Running
	I1212 21:16:42.927632   60948 system_pods.go:89] "kube-apiserver-old-k8s-version-372099" [293c3d5c-d293-479d-8eb1-e4564b9ac9c3] Running
	I1212 21:16:42.927637   60948 system_pods.go:89] "kube-controller-manager-old-k8s-version-372099" [995d3a8b-06f0-44b2-aa45-e549152a7d9d] Running
	I1212 21:16:42.927642   60948 system_pods.go:89] "kube-proxy-vzqkz" [099e5cd7-0ded-49f0-950a-9eb0e76731bd] Running
	I1212 21:16:42.927647   60948 system_pods.go:89] "kube-scheduler-old-k8s-version-372099" [0e3e4e58-289f-47f1-999b-8fd87b90558a] Running
	I1212 21:16:42.927653   60948 system_pods.go:89] "metrics-server-74d5856cc6-7bvqn" [29cb3b64-a573-46a0-89c7-baf4e6453de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 21:16:42.927658   60948 system_pods.go:89] "storage-provisioner" [aca70999-fc12-4544-93d1-9f61719412b5] Running
	I1212 21:16:42.927667   60948 system_pods.go:126] duration metric: took 45.639007967s to wait for k8s-apps to be running ...
	I1212 21:16:42.927673   60948 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:16:42.927715   60948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:16:42.948680   60948 system_svc.go:56] duration metric: took 20.9943ms WaitForService to wait for kubelet.
	I1212 21:16:42.948711   60948 kubeadm.go:581] duration metric: took 56.56793182s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 21:16:42.948735   60948 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:16:42.952462   60948 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 21:16:42.952493   60948 node_conditions.go:123] node cpu capacity is 2
	I1212 21:16:42.952505   60948 node_conditions.go:105] duration metric: took 3.763543ms to run NodePressure ...
	I1212 21:16:42.952518   60948 start.go:228] waiting for startup goroutines ...
	I1212 21:16:42.952527   60948 start.go:233] waiting for cluster config update ...
	I1212 21:16:42.952541   60948 start.go:242] writing updated cluster config ...
	I1212 21:16:42.952847   60948 ssh_runner.go:195] Run: rm -f paused
	I1212 21:16:43.001964   60948 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 21:16:43.003962   60948 out.go:177] 
	W1212 21:16:43.005327   60948 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 21:16:43.006827   60948 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 21:16:43.008259   60948 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-372099" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2023-12-12 21:09:39 UTC, ends at Tue 2023-12-12 21:29:34 UTC. --
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.114150979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416574114124603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=e001f27f-0143-433f-ab4b-43ac20ffcf21 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.118540910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46fae371-2f10-402d-9bb9-3d3bc825ea27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.118645772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46fae371-2f10-402d-9bb9-3d3bc825ea27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.118986051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46fae371-2f10-402d-9bb9-3d3bc825ea27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.173323057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=de626281-37fd-4c5a-86a3-748966d3f6aa name=/runtime.v1.RuntimeService/Version
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.173383207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=de626281-37fd-4c5a-86a3-748966d3f6aa name=/runtime.v1.RuntimeService/Version
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.174892301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=650ebc72-e6f5-44a1-bb4a-1964778c6c57 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.175420703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416574175402222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=650ebc72-e6f5-44a1-bb4a-1964778c6c57 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.176072969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49022854-4d2f-4fd0-8a19-1a9ddb04121c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.176164407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49022854-4d2f-4fd0-8a19-1a9ddb04121c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.176388506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49022854-4d2f-4fd0-8a19-1a9ddb04121c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.219830228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=75e63fb7-43a7-4a48-893f-32d22e6599d3 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.219923874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=75e63fb7-43a7-4a48-893f-32d22e6599d3 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.222309757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=203672b4-48d3-40b5-a5cb-1c7bcb6ada84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.222905144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416574222883515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=203672b4-48d3-40b5-a5cb-1c7bcb6ada84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.223571172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3bc14d2f-7016-4857-82b5-2f680d9cfb6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.223625731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3bc14d2f-7016-4857-82b5-2f680d9cfb6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.223913907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3bc14d2f-7016-4857-82b5-2f680d9cfb6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.262396040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=19aaac34-27ac-4075-99ee-97625f275f97 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.262531937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=19aaac34-27ac-4075-99ee-97625f275f97 name=/runtime.v1.RuntimeService/Version
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.264441012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c3c9b830-cdad-4800-b6c9-c42578228d7e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.265306259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702416574265280050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c3c9b830-cdad-4800-b6c9-c42578228d7e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.266154520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6f90d034-9d8a-41e1-947e-9a40a8171760 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.266210515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6f90d034-9d8a-41e1-947e-9a40a8171760 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 21:29:34 old-k8s-version-372099 crio[715]: time="2023-12-12 21:29:34.266423753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c,PodSandboxId:bf65a58303fb9cdfe9312121960980df8619014ccd1711a9ed79e6a97e0a92c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702415749534433440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca70999-fc12-4544-93d1-9f61719412b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfbd701,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d,PodSandboxId:75407785556de29b8ffadc8404f84209ec33846ec536f0ff762e76711a85da31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702415749069514010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vzqkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099e5cd7-0ded-49f0-950a-9eb0e76731bd,},Annotations:map[string]string{io.kubernetes.container.hash: d3e31c37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f,PodSandboxId:50d05686423775220482c822797b7192d2f06b5d37bb8095751b0c65ba533139,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748767415418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-bd52f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ffc3a15-39e3-43be-a904-12e36683f6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249,PodSandboxId:4bdcafdd6af6ff8f3050174713033d657be5b6dd788818f1cc21ab15841688fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702415748747298975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-cn5ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1526d85b-394f-4ba3-b35c-f8d134080ea7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 3becb3ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287,PodSandboxId:bb301472398ec210baea586d2db3b984c6acb90724ae9512f10c3ee305a1d0e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702415722315356345,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56dfa18635f0257955580e4d5610489,},Annotations:map[string]string{io.kubernetes.container.hash: ac1e2798,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3,PodSandboxId:1970b67fe43759d08732a771bba9efd580b24da4db333b23634ed1e9cb5d8662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702415721477086455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42,PodSandboxId:64cf126e7e9ad4fc985ab0a42c0919b43afda1ce7cae7d3da684716a49ea415a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702415721041152377,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702415720512718609,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b,PodSandboxId:55f427319ae8cc9687b46f37b1bfd4b2a2c6347569756bc958b9a881e494c748,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702415412565352317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-old-k8s-version-372099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea80ba674ac78bcd1f4e0fcbbb7e1ab,},Annotations:map[string]string{io.kubernetes.container.hash: a062652e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6f90d034-9d8a-41e1-947e-9a40a8171760 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a86d4c17d7119       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   bf65a58303fb9       storage-provisioner
	d830469561f0e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   75407785556de       kube-proxy-vzqkz
	6084f17e07859       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   50d0568642377       coredns-5644d7b6d9-bd52f
	b1b2934a77797       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   4bdcafdd6af6f       coredns-5644d7b6d9-cn5ch
	b1729474db3e1       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   bb301472398ec       etcd-old-k8s-version-372099
	984fd725da2f0       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   1970b67fe4375       kube-scheduler-old-k8s-version-372099
	99f95412173dd       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   64cf126e7e9ad       kube-controller-manager-old-k8s-version-372099
	7efe2c7c23a8f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            1                   55f427319ae8c       kube-apiserver-old-k8s-version-372099
	457b7a6cb9832       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   19 minutes ago      Exited              kube-apiserver            0                   55f427319ae8c       kube-apiserver-old-k8s-version-372099
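For reference, a table like this can be regenerated by hand from inside the node; a minimal sketch, assuming crictl is present in the minikube VM and is pointed at the default CRI-O socket:

    # list all containers, including exited ones, through the CRI
    minikube -p old-k8s-version-372099 ssh -- sudo crictl ps -a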
	
	
	==> coredns [6084f17e07859324146da8180f5773d267827395141ea82667ba2d3ead9cd41f] <==
	.:53
	2023-12-12T21:15:49.277Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-12T21:15:49.277Z [INFO] CoreDNS-1.6.2
	2023-12-12T21:15:49.277Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-12T21:15:49.296Z [INFO] 127.0.0.1:45076 - 57804 "HINFO IN 4110017162655409842.5650151957092772318. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017309537s
	
	
	==> coredns [b1b2934a77797c2b572bc0ee838a6b38ea19686d4bf9cff5ff9c22249a6a5249] <==
	.:53
	2023-12-12T21:15:49.174Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-12T21:15:49.175Z [INFO] CoreDNS-1.6.2
	2023-12-12T21:15:49.175Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-12T21:15:49.188Z [INFO] 127.0.0.1:43809 - 5847 "HINFO IN 7768456833403375853.4299604537471653683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015274703s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-372099
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-372099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=old-k8s-version-372099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T21_15_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 21:15:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 21:29:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 21:29:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 21:29:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 21:29:26 +0000   Tue, 12 Dec 2023 21:15:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    old-k8s-version-372099
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 fc3e555bcb6b471382a2733409d8eed0
	 System UUID:                fc3e555b-cb6b-4713-82a2-733409d8eed0
	 Boot ID:                    86498489-2351-495d-9062-a47090f2d467
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-bd52f                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                coredns-5644d7b6d9-cn5ch                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-372099                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-372099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-372099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-vzqkz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-372099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-7bvqn                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 14m                kubelet, old-k8s-version-372099     Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet, old-k8s-version-372099     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-372099     Node old-k8s-version-372099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet, old-k8s-version-372099     Node old-k8s-version-372099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet, old-k8s-version-372099     Node old-k8s-version-372099 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-372099  Starting kube-proxy.
	
	
	==> dmesg <==
	[Dec12 21:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068504] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.746890] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.558480] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153277] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.442557] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.119154] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.118210] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.160906] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.121507] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.236518] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Dec12 21:10] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[  +0.428968] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.952906] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.277966] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 21:15] systemd-fstab-generator[3139]: Ignoring "noauto" for root device
	[ +29.481995] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.543941] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [b1729474db3e1e8098c9bd790b1a8f5d761848b680ce9a60f9c20af90da75287] <==
	2023-12-12 21:15:22.464407 I | raft: f9de38f1a7e06692 became follower at term 0
	2023-12-12 21:15:22.464462 I | raft: newRaft f9de38f1a7e06692 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-12 21:15:22.464486 I | raft: f9de38f1a7e06692 became follower at term 1
	2023-12-12 21:15:22.475212 W | auth: simple token is not cryptographically signed
	2023-12-12 21:15:22.488048 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-12 21:15:22.490302 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 21:15:22.490560 I | embed: listening for metrics on http://192.168.39.202:2381
	2023-12-12 21:15:22.490992 I | etcdserver: f9de38f1a7e06692 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-12 21:15:22.491475 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 21:15:22.491896 I | etcdserver/membership: added member f9de38f1a7e06692 [https://192.168.39.202:2380] to cluster e4e52c0b9ecc5e15
	2023-12-12 21:15:22.565099 I | raft: f9de38f1a7e06692 is starting a new election at term 1
	2023-12-12 21:15:22.565381 I | raft: f9de38f1a7e06692 became candidate at term 2
	2023-12-12 21:15:22.565513 I | raft: f9de38f1a7e06692 received MsgVoteResp from f9de38f1a7e06692 at term 2
	2023-12-12 21:15:22.565622 I | raft: f9de38f1a7e06692 became leader at term 2
	2023-12-12 21:15:22.565646 I | raft: raft.node: f9de38f1a7e06692 elected leader f9de38f1a7e06692 at term 2
	2023-12-12 21:15:22.566299 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-12 21:15:22.567569 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-12 21:15:22.568540 I | etcdserver: published {Name:old-k8s-version-372099 ClientURLs:[https://192.168.39.202:2379]} to cluster e4e52c0b9ecc5e15
	2023-12-12 21:15:22.568679 I | embed: ready to serve client requests
	2023-12-12 21:15:22.572099 I | embed: serving client requests on 192.168.39.202:2379
	2023-12-12 21:15:22.572412 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-12 21:15:22.572665 I | embed: ready to serve client requests
	2023-12-12 21:15:22.583507 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 21:25:22.604253 I | mvcc: store.index: compact 679
	2023-12-12 21:25:22.606982 I | mvcc: finished scheduled compaction at 679 (took 1.985822ms)
	
	
	==> kernel <==
	 21:29:34 up 20 min,  0 users,  load average: 0.51, 0.28, 0.15
	Linux old-k8s-version-372099 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [457b7a6cb9832c94d1f52e5a12a019727861988744f49cd541a523cca8f6355b] <==
	W1212 21:15:18.186727       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.194111       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.202128       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.209448       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.224405       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.227851       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.247326       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.265874       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.277224       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.279283       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.288240       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.305606       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.307576       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.317120       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.325137       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.329464       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.329568       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.350643       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.360441       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.366709       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.367089       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.367093       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.376956       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.384826       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1212 21:15:18.395548       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [7efe2c7c23a8f46c267fc8fc29e02a91db9136e66042c2d6fc0b5d94d876c51f] <==
	I1212 21:21:26.900032       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:21:26.900369       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:21:26.900491       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:21:26.900623       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:23:26.901434       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:23:26.901896       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:23:26.902014       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:23:26.902045       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:25:26.903063       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:25:26.903177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:25:26.903237       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:25:26.903268       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:26:26.903643       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:26:26.903825       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:26:26.903868       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:26:26.903876       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 21:28:26.904355       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 21:28:26.904485       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 21:28:26.904562       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 21:28:26.904570       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [99f95412173dd39b800238586e36b39a04baaa378b0093d705c78f8585d48d42] <==
	W1212 21:23:14.410402       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:23:20.170829       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:23:46.412738       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:23:50.423033       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:24:18.416019       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:24:20.675342       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:24:50.418323       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:24:50.927479       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1212 21:25:21.179369       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:25:22.420487       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:25:51.432302       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:25:54.423118       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:26:21.684558       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:26:26.425154       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:26:51.937040       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:26:58.427437       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:27:22.190339       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:27:30.429671       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:27:52.442931       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:28:02.431959       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:28:22.695315       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:28:34.434162       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:28:52.947998       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 21:29:06.436379       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 21:29:23.200221       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [d830469561f0e95ddaa1adfad5303c0e8ed60f1658e8b117842250005fcf8c5d] <==
	W1212 21:15:49.532549       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1212 21:15:49.558088       1 node.go:135] Successfully retrieved node IP: 192.168.39.202
	I1212 21:15:49.558235       1 server_others.go:149] Using iptables Proxier.
	I1212 21:15:49.560810       1 server.go:529] Version: v1.16.0
	I1212 21:15:49.564621       1 config.go:313] Starting service config controller
	I1212 21:15:49.564683       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1212 21:15:49.562710       1 config.go:131] Starting endpoints config controller
	I1212 21:15:49.564738       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1212 21:15:49.666331       1 shared_informer.go:204] Caches are synced for service config 
	I1212 21:15:49.676990       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [984fd725da2f0773d03c24b7016ff8e06dcea899f6d38f767d71d613399f3fd3] <==
	I1212 21:15:25.898220       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1212 21:15:25.957187       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 21:15:25.965456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:15:25.965580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:15:25.965658       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 21:15:25.965903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 21:15:25.966403       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:15:25.968129       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:15:25.968282       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:25.968694       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 21:15:25.972375       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:25.972472       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:26.963666       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 21:15:26.968920       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 21:15:26.974126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 21:15:26.975156       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 21:15:26.978351       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 21:15:26.979697       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 21:15:26.982620       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 21:15:26.983417       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:26.984608       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 21:15:26.987867       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 21:15:26.988559       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 21:15:46.320945       1 factory.go:585] pod is already present in the activeQ
	E1212 21:15:46.445012       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 21:09:39 UTC, ends at Tue 2023-12-12 21:29:34 UTC. --
	Dec 12 21:25:05 old-k8s-version-372099 kubelet[3145]: E1212 21:25:05.821346    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:19 old-k8s-version-372099 kubelet[3145]: E1212 21:25:19.825652    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:19 old-k8s-version-372099 kubelet[3145]: E1212 21:25:19.922328    3145 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 12 21:25:32 old-k8s-version-372099 kubelet[3145]: E1212 21:25:32.821961    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:44 old-k8s-version-372099 kubelet[3145]: E1212 21:25:44.821469    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:25:58 old-k8s-version-372099 kubelet[3145]: E1212 21:25:58.821864    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:26:11 old-k8s-version-372099 kubelet[3145]: E1212 21:26:11.821891    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:26:22 old-k8s-version-372099 kubelet[3145]: E1212 21:26:22.821458    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:26:34 old-k8s-version-372099 kubelet[3145]: E1212 21:26:34.848099    3145 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:26:34 old-k8s-version-372099 kubelet[3145]: E1212 21:26:34.848459    3145 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:26:34 old-k8s-version-372099 kubelet[3145]: E1212 21:26:34.848584    3145 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 21:26:34 old-k8s-version-372099 kubelet[3145]: E1212 21:26:34.848647    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 12 21:26:48 old-k8s-version-372099 kubelet[3145]: E1212 21:26:48.822056    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:27:02 old-k8s-version-372099 kubelet[3145]: E1212 21:27:02.821552    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:27:16 old-k8s-version-372099 kubelet[3145]: E1212 21:27:16.821636    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:27:27 old-k8s-version-372099 kubelet[3145]: E1212 21:27:27.821802    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:27:42 old-k8s-version-372099 kubelet[3145]: E1212 21:27:42.821935    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:27:55 old-k8s-version-372099 kubelet[3145]: E1212 21:27:55.821647    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:28:10 old-k8s-version-372099 kubelet[3145]: E1212 21:28:10.821681    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:28:25 old-k8s-version-372099 kubelet[3145]: E1212 21:28:25.821425    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:28:37 old-k8s-version-372099 kubelet[3145]: E1212 21:28:37.821454    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:28:51 old-k8s-version-372099 kubelet[3145]: E1212 21:28:51.822563    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:29:03 old-k8s-version-372099 kubelet[3145]: E1212 21:29:03.821729    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:29:17 old-k8s-version-372099 kubelet[3145]: E1212 21:29:17.821900    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 21:29:30 old-k8s-version-372099 kubelet[3145]: E1212 21:29:30.822170    3145 pod_workers.go:191] Error syncing pod 29cb3b64-a573-46a0-89c7-baf4e6453de8 ("metrics-server-74d5856cc6-7bvqn_kube-system(29cb3b64-a573-46a0-89c7-baf4e6453de8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
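The repeated ImagePullBackOff entries above come from the metrics-server pod pulling from an unresolvable registry (fake.domain). A minimal sketch of how to confirm the configured image reference, assuming the pod is managed by a kube-system Deployment named metrics-server (the metrics-server-74d5856cc6-* pod name suggests this, but the Deployment name is an assumption here):

    # print the container image configured on the (assumed) metrics-server Deployment
    kubectl --context old-k8s-version-372099 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'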
	
	
	==> storage-provisioner [a86d4c17d71192fc6d783058f3c344c617ba5f1b6b3f13fb73c6f18f86ad927c] <==
	I1212 21:15:49.687955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 21:15:49.704387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 21:15:49.704513       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 21:15:49.717483       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 21:15:49.719760       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-372099_9c373048-b63a-4f19-8ac7-5f4a944596ed!
	I1212 21:15:49.719609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"844a18be-5145-4e70-9a82-93e0dff5efba", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-372099_9c373048-b63a-4f19-8ac7-5f4a944596ed became leader
	I1212 21:15:49.824533       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-372099_9c373048-b63a-4f19-8ac7-5f4a944596ed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-372099 -n old-k8s-version-372099
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-372099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-7bvqn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-372099 describe pod metrics-server-74d5856cc6-7bvqn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-372099 describe pod metrics-server-74d5856cc6-7bvqn: exit status 1 (75.096409ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-7bvqn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-372099 describe pod metrics-server-74d5856cc6-7bvqn: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (229.43s)
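The post-mortem checks above can be repeated by hand against the same profile; a short sketch reusing the exact commands the test helpers run, assuming the old-k8s-version-372099 profile and kubeconfig context still exist:

    # report the API server status for the profile's node
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-372099 -n old-k8s-version-372099
    # list pods that are not in the Running phase, across all namespaces
    kubectl --context old-k8s-version-372099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running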

                                                
                                    

Test pass (240/305)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.86
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 9.34
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 10.9
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.56
27 TestOffline 109.57
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 148.21
34 TestAddons/parallel/Registry 15.5
36 TestAddons/parallel/InspektorGadget 11.05
37 TestAddons/parallel/MetricsServer 6.11
38 TestAddons/parallel/HelmTiller 11.27
40 TestAddons/parallel/CSI 89.83
41 TestAddons/parallel/Headlamp 15.82
42 TestAddons/parallel/CloudSpanner 5.99
43 TestAddons/parallel/LocalPath 56.49
44 TestAddons/parallel/NvidiaDevicePlugin 5.85
47 TestAddons/serial/GCPAuth/Namespaces 0.14
49 TestCertOptions 78.77
50 TestCertExpiration 336.79
52 TestForceSystemdFlag 70.8
53 TestForceSystemdEnv 68.94
55 TestKVMDriverInstallOrUpdate 1.57
59 TestErrorSpam/setup 47.18
60 TestErrorSpam/start 0.4
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.66
63 TestErrorSpam/unpause 1.76
64 TestErrorSpam/stop 2.26
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 60.27
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 40.91
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
76 TestFunctional/serial/CacheCmd/cache/add_local 1.06
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 32.8
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.54
87 TestFunctional/serial/LogsFileCmd 1.61
88 TestFunctional/serial/InvalidService 4.16
90 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DashboardCmd 14.58
92 TestFunctional/parallel/DryRun 0.29
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.89
98 TestFunctional/parallel/ServiceCmdConnect 24.71
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 49.61
102 TestFunctional/parallel/SSHCmd 0.49
103 TestFunctional/parallel/CpCmd 1.66
104 TestFunctional/parallel/MySQL 26.24
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.63
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.19
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 0.79
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.37
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
124 TestFunctional/parallel/ImageCommands/ImageBuild 6.21
125 TestFunctional/parallel/ImageCommands/Setup 1.04
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.34
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.77
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.89
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.18
133 TestFunctional/parallel/ServiceCmd/DeployApp 9.76
134 TestFunctional/parallel/MountCmd/any-port 7.89
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.81
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.27
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.56
138 TestFunctional/parallel/MountCmd/specific-port 2.17
139 TestFunctional/parallel/ServiceCmd/List 1.35
140 TestFunctional/parallel/ServiceCmd/JSONOutput 1.38
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
143 TestFunctional/parallel/ServiceCmd/Format 0.44
150 TestFunctional/parallel/ServiceCmd/URL 0.37
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestIngressAddonLegacy/StartLegacyK8sCluster 105.36
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.63
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
167 TestJSONOutput/start/Command 58.83
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.71
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.65
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.1
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 97.55
199 TestMountStart/serial/StartWithMountFirst 26.2
200 TestMountStart/serial/VerifyMountFirst 0.4
201 TestMountStart/serial/StartWithMountSecond 28.9
202 TestMountStart/serial/VerifyMountSecond 0.4
203 TestMountStart/serial/DeleteFirst 0.88
204 TestMountStart/serial/VerifyMountPostDelete 0.4
205 TestMountStart/serial/Stop 1.16
206 TestMountStart/serial/RestartStopped 21.6
207 TestMountStart/serial/VerifyMountPostStop 0.4
210 TestMultiNode/serial/FreshStart2Nodes 112.39
211 TestMultiNode/serial/DeployApp2Nodes 4.43
213 TestMultiNode/serial/AddNode 44.71
214 TestMultiNode/serial/MultiNodeLabels 0.06
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.76
217 TestMultiNode/serial/StopNode 3.02
218 TestMultiNode/serial/StartAfterStop 29.21
220 TestMultiNode/serial/DeleteNode 1.59
222 TestMultiNode/serial/RestartMultiNode 447.14
223 TestMultiNode/serial/ValidateNameConflict 48.84
230 TestScheduledStopUnix 117.62
236 TestKubernetesUpgrade 194.62
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
240 TestNoKubernetes/serial/StartWithK8s 107.55
241 TestNoKubernetes/serial/StartWithStopK8s 27.19
242 TestStoppedBinaryUpgrade/Setup 0.44
244 TestNoKubernetes/serial/Start 28.24
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
246 TestNoKubernetes/serial/ProfileList 1.22
247 TestNoKubernetes/serial/Stop 1.87
248 TestNoKubernetes/serial/StartNoArgs 22.43
249 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
257 TestNetworkPlugins/group/false 4.14
262 TestPause/serial/Start 103.75
270 TestPause/serial/SecondStartNoReconfiguration 55.54
271 TestNetworkPlugins/group/auto/Start 117.66
272 TestPause/serial/Pause 0.84
273 TestPause/serial/VerifyStatus 0.31
274 TestPause/serial/Unpause 1.18
275 TestPause/serial/PauseAgain 1.45
276 TestPause/serial/DeletePaused 1.26
277 TestPause/serial/VerifyDeletedResources 3.43
278 TestNetworkPlugins/group/kindnet/Start 101.71
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.43
280 TestNetworkPlugins/group/calico/Start 119.38
281 TestNetworkPlugins/group/auto/KubeletFlags 0.24
282 TestNetworkPlugins/group/auto/NetCatPod 13.38
283 TestNetworkPlugins/group/auto/DNS 0.17
284 TestNetworkPlugins/group/auto/Localhost 0.15
285 TestNetworkPlugins/group/auto/HairPin 0.15
286 TestNetworkPlugins/group/custom-flannel/Start 87.48
287 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
288 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
289 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
290 TestNetworkPlugins/group/kindnet/DNS 0.19
291 TestNetworkPlugins/group/kindnet/Localhost 0.16
292 TestNetworkPlugins/group/kindnet/HairPin 0.18
293 TestNetworkPlugins/group/enable-default-cni/Start 102.99
294 TestNetworkPlugins/group/calico/ControllerPod 5.04
295 TestNetworkPlugins/group/calico/KubeletFlags 0.23
296 TestNetworkPlugins/group/calico/NetCatPod 12.4
297 TestNetworkPlugins/group/calico/DNS 0.23
298 TestNetworkPlugins/group/calico/Localhost 0.19
299 TestNetworkPlugins/group/calico/HairPin 0.19
300 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
301 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.47
302 TestNetworkPlugins/group/flannel/Start 90.32
303 TestNetworkPlugins/group/custom-flannel/DNS 0.21
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
306 TestNetworkPlugins/group/bridge/Start 110.92
308 TestStartStop/group/old-k8s-version/serial/FirstStart 162.84
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.31
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
315 TestStartStop/group/no-preload/serial/FirstStart 91.04
316 TestNetworkPlugins/group/flannel/ControllerPod 5.03
317 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
318 TestNetworkPlugins/group/flannel/NetCatPod 13.58
319 TestNetworkPlugins/group/flannel/DNS 0.19
320 TestNetworkPlugins/group/flannel/Localhost 0.16
321 TestNetworkPlugins/group/flannel/HairPin 0.17
323 TestStartStop/group/embed-certs/serial/FirstStart 66.54
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
325 TestNetworkPlugins/group/bridge/NetCatPod 12.35
326 TestNetworkPlugins/group/bridge/DNS 0.2
327 TestNetworkPlugins/group/bridge/Localhost 0.17
328 TestNetworkPlugins/group/bridge/HairPin 0.14
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.41
331 TestStartStop/group/no-preload/serial/DeployApp 9.04
332 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
334 TestStartStop/group/embed-certs/serial/DeployApp 8.5
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.81
337 TestStartStop/group/old-k8s-version/serial/DeployApp 8.47
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.94
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
344 TestStartStop/group/no-preload/serial/SecondStart 704.95
347 TestStartStop/group/embed-certs/serial/SecondStart 570.29
348 TestStartStop/group/old-k8s-version/serial/SecondStart 706.28
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 545.71
360 TestStartStop/group/newest-cni/serial/FirstStart 62.02
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
363 TestStartStop/group/newest-cni/serial/Stop 3.12
364 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/newest-cni/serial/SecondStart 49.88
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
369 TestStartStop/group/newest-cni/serial/Pause 2.64
TestDownloadOnly/v1.16.0/json-events (10.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.857088764s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.86s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931277
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931277: exit status 85 (71.979408ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |          |
	|         | -p download-only-931277        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 19:56:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:56:38.879469   16468 out.go:296] Setting OutFile to fd 1 ...
	I1212 19:56:38.879734   16468 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:56:38.879744   16468 out.go:309] Setting ErrFile to fd 2...
	I1212 19:56:38.879749   16468 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:56:38.879922   16468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	W1212 19:56:38.880031   16468 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17734-9188/.minikube/config/config.json: open /home/jenkins/minikube-integration/17734-9188/.minikube/config/config.json: no such file or directory
	I1212 19:56:38.880583   16468 out.go:303] Setting JSON to true
	I1212 19:56:38.881418   16468 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2353,"bootTime":1702408646,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:56:38.881477   16468 start.go:138] virtualization: kvm guest
	I1212 19:56:38.884101   16468 out.go:97] [download-only-931277] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 19:56:38.885640   16468 out.go:169] MINIKUBE_LOCATION=17734
	W1212 19:56:38.884223   16468 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 19:56:38.884294   16468 notify.go:220] Checking for updates...
	I1212 19:56:38.888502   16468 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:56:38.889871   16468 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 19:56:38.891210   16468 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 19:56:38.892479   16468 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:56:38.894710   16468 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:56:38.894906   16468 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 19:56:38.994013   16468 out.go:97] Using the kvm2 driver based on user configuration
	I1212 19:56:38.994043   16468 start.go:298] selected driver: kvm2
	I1212 19:56:38.994049   16468 start.go:902] validating driver "kvm2" against <nil>
	I1212 19:56:38.994378   16468 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:56:38.994534   16468 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 19:56:39.009172   16468 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 19:56:39.009263   16468 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 19:56:39.009728   16468 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1212 19:56:39.009870   16468 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:56:39.009919   16468 cni.go:84] Creating CNI manager for ""
	I1212 19:56:39.009933   16468 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:56:39.009943   16468 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:56:39.009949   16468 start_flags.go:323] config:
	{Name:download-only-931277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-931277 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:56:39.010143   16468 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:56:39.012234   16468 out.go:97] Downloading VM boot image ...
	I1212 19:56:39.012276   16468 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 19:56:42.211393   16468 out.go:97] Starting control plane node download-only-931277 in cluster download-only-931277
	I1212 19:56:42.211415   16468 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 19:56:42.234067   16468 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 19:56:42.234097   16468 cache.go:56] Caching tarball of preloaded images
	I1212 19:56:42.234232   16468 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 19:56:42.236122   16468 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 19:56:42.236139   16468 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 19:56:42.276906   16468 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 19:56:48.319124   16468 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 19:56:48.319214   16468 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931277"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
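The non-zero exit here is expected: a --download-only start only caches the ISO, the preload tarball and the Kubernetes binaries and never creates a node, so "minikube logs" has no control plane to read and fails (exit status 85 in this run) while the test still passes. A minimal sketch of the same flow, reusing the flags recorded in the Audit table above:

  # download-only: caches artifacts for v1.16.0 without creating a VM
  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 \
    --force --alsologtostderr --kubernetes-version=v1.16.0 \
    --container-runtime=crio --driver=kvm2
  # with no node created, logs reports: The control plane node "" does not exist
  out/minikube-linux-amd64 logs -p download-only-931277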

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (9.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.334849174s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.34s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931277
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931277: exit status 85 (71.38341ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |          |
	|         | -p download-only-931277        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |          |
	|         | -p download-only-931277        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 19:56:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:56:49.809734   16525 out.go:296] Setting OutFile to fd 1 ...
	I1212 19:56:49.809888   16525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:56:49.809898   16525 out.go:309] Setting ErrFile to fd 2...
	I1212 19:56:49.809903   16525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:56:49.810120   16525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	W1212 19:56:49.810242   16525 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17734-9188/.minikube/config/config.json: open /home/jenkins/minikube-integration/17734-9188/.minikube/config/config.json: no such file or directory
	I1212 19:56:49.810695   16525 out.go:303] Setting JSON to true
	I1212 19:56:49.811626   16525 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2364,"bootTime":1702408646,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:56:49.811698   16525 start.go:138] virtualization: kvm guest
	I1212 19:56:49.814102   16525 out.go:97] [download-only-931277] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 19:56:49.815832   16525 out.go:169] MINIKUBE_LOCATION=17734
	I1212 19:56:49.814326   16525 notify.go:220] Checking for updates...
	I1212 19:56:49.819097   16525 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:56:49.820867   16525 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 19:56:49.822580   16525 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 19:56:49.824358   16525 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:56:49.827181   16525 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:56:49.827656   16525 config.go:182] Loaded profile config "download-only-931277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1212 19:56:49.827725   16525 start.go:810] api.Load failed for download-only-931277: filestore "download-only-931277": Docker machine "download-only-931277" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 19:56:49.827826   16525 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 19:56:49.827871   16525 start.go:810] api.Load failed for download-only-931277: filestore "download-only-931277": Docker machine "download-only-931277" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 19:56:49.859336   16525 out.go:97] Using the kvm2 driver based on existing profile
	I1212 19:56:49.859361   16525 start.go:298] selected driver: kvm2
	I1212 19:56:49.859368   16525 start.go:902] validating driver "kvm2" against &{Name:download-only-931277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-931277 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:56:49.859793   16525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:56:49.859873   16525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 19:56:49.874051   16525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 19:56:49.875036   16525 cni.go:84] Creating CNI manager for ""
	I1212 19:56:49.875058   16525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:56:49.875076   16525 start_flags.go:323] config:
	{Name:download-only-931277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-931277 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:56:49.875310   16525 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:56:49.877200   16525 out.go:97] Starting control plane node download-only-931277 in cluster download-only-931277
	I1212 19:56:49.877214   16525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 19:56:49.906362   16525 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 19:56:49.906412   16525 cache.go:56] Caching tarball of preloaded images
	I1212 19:56:49.906677   16525 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 19:56:49.908842   16525 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 19:56:49.908865   16525 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 19:56:49.934934   16525 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931277"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (10.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931277 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.898578389s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.90s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931277
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931277: exit status 85 (70.994694ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |          |
	|         | -p download-only-931277           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |          |
	|         | -p download-only-931277           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-931277 | jenkins | v1.32.0 | 12 Dec 23 19:56 UTC |          |
	|         | -p download-only-931277           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 19:56:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:56:59.217802   16581 out.go:296] Setting OutFile to fd 1 ...
	I1212 19:56:59.217978   16581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:56:59.217988   16581 out.go:309] Setting ErrFile to fd 2...
	I1212 19:56:59.217993   16581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 19:56:59.218233   16581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	W1212 19:56:59.218389   16581 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17734-9188/.minikube/config/config.json: open /home/jenkins/minikube-integration/17734-9188/.minikube/config/config.json: no such file or directory
	I1212 19:56:59.218887   16581 out.go:303] Setting JSON to true
	I1212 19:56:59.219866   16581 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2373,"bootTime":1702408646,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:56:59.219927   16581 start.go:138] virtualization: kvm guest
	I1212 19:56:59.222020   16581 out.go:97] [download-only-931277] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 19:56:59.223481   16581 out.go:169] MINIKUBE_LOCATION=17734
	I1212 19:56:59.222129   16581 notify.go:220] Checking for updates...
	I1212 19:56:59.226249   16581 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:56:59.227601   16581 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 19:56:59.228929   16581 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 19:56:59.230177   16581 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:56:59.233052   16581 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:56:59.233471   16581 config.go:182] Loaded profile config "download-only-931277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 19:56:59.233518   16581 start.go:810] api.Load failed for download-only-931277: filestore "download-only-931277": Docker machine "download-only-931277" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 19:56:59.233603   16581 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 19:56:59.233632   16581 start.go:810] api.Load failed for download-only-931277: filestore "download-only-931277": Docker machine "download-only-931277" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 19:56:59.264518   16581 out.go:97] Using the kvm2 driver based on existing profile
	I1212 19:56:59.264541   16581 start.go:298] selected driver: kvm2
	I1212 19:56:59.264546   16581 start.go:902] validating driver "kvm2" against &{Name:download-only-931277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-931277 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:56:59.264915   16581 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:56:59.265010   16581 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17734-9188/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 19:56:59.280787   16581 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 19:56:59.281508   16581 cni.go:84] Creating CNI manager for ""
	I1212 19:56:59.281524   16581 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:56:59.281537   16581 start_flags.go:323] config:
	{Name:download-only-931277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-931277 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 19:56:59.281681   16581 iso.go:125] acquiring lock: {Name:mk5ab9bbcc5172beb37341e3e5827925f7e65dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:56:59.283455   16581 out.go:97] Starting control plane node download-only-931277 in cluster download-only-931277
	I1212 19:56:59.283477   16581 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 19:56:59.310074   16581 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 19:56:59.310100   16581 cache.go:56] Caching tarball of preloaded images
	I1212 19:56:59.310253   16581 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 19:56:59.312493   16581 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 19:56:59.312520   16581 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 19:56:59.343163   16581 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:4677ed63f210d912abc47b8c2f7401f7 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 19:57:05.841844   16581 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 19:57:05.841971   16581 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17734-9188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 19:57:06.656520   16581 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 19:57:06.656655   16581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/download-only-931277/config.json ...
	I1212 19:57:06.656848   16581 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 19:57:06.657026   16581 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17734-9188/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931277"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-931277
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-839741 --alsologtostderr --binary-mirror http://127.0.0.1:36279 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-839741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-839741
--- PASS: TestBinaryMirror (0.56s)
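TestBinaryMirror starts a download-only profile with --binary-mirror so the Kubernetes binaries (kubectl, kubelet, kubeadm) are fetched from the given mirror rather than the default upstream; the 127.0.0.1:36279 endpoint is the ephemeral mirror served by this test run, not a generally reachable address. A minimal sketch under those assumptions:

  # point binary downloads at the run-local mirror (only valid during this run)
  out/minikube-linux-amd64 start --download-only -p binary-mirror-839741 \
    --alsologtostderr --binary-mirror http://127.0.0.1:36279 \
    --driver=kvm2 --container-runtime=crio
  # cleanup, as in helpers_test.go:178
  out/minikube-linux-amd64 delete -p binary-mirror-839741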

                                                
                                    
TestOffline (109.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-297885 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-297885 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m48.399248133s)
helpers_test.go:175: Cleaning up "offline-crio-297885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-297885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-297885: (1.173886299s)
--- PASS: TestOffline (109.57s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-459174
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-459174: exit status 85 (61.928609ms)
-- stdout --
	* Profile "addons-459174" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-459174"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-459174
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-459174: exit status 85 (62.206074ms)
-- stdout --
	* Profile "addons-459174" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-459174"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (148.21s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-459174 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-459174 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.207557082s)
--- PASS: TestAddons/Setup (148.21s)

TestAddons/parallel/Registry (15.5s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 27.456818ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qhjd2" [354858fb-09b5-436c-abc2-09d0c29c3561] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020655701s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xfflw" [318f5bf5-ed29-48d0-83db-7941bc942aee] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018298423s
addons_test.go:339: (dbg) Run:  kubectl --context addons-459174 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-459174 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-459174 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.611839826s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.50s)

TestAddons/parallel/InspektorGadget (11.05s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-plqkt" [5e28b679-2644-4624-872a-bfbc08d5c7bb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.038045229s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-459174
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-459174: (6.007686539s)
--- PASS: TestAddons/parallel/InspektorGadget (11.05s)

TestAddons/parallel/MetricsServer (6.11s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 27.513142ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-8kvhh" [07e76411-9144-446a-9e56-c452110150e9] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017853101s
addons_test.go:414: (dbg) Run:  kubectl --context addons-459174 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.11s)

TestAddons/parallel/HelmTiller (11.27s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.655354ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gfg56" [4712f730-1a01-40a5-9285-e1d920fd46c2] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014833414s
addons_test.go:472: (dbg) Run:  kubectl --context addons-459174 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-459174 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.544803979s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.27s)

TestAddons/parallel/CSI (89.83s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 27.878057ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-459174 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/12/12 19:59:54 [DEBUG] GET http://192.168.39.145:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-459174 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c4aac831-1ac5-48a4-b25b-4cf7edaeef5d] Pending
helpers_test.go:344: "task-pv-pod" [c4aac831-1ac5-48a4-b25b-4cf7edaeef5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c4aac831-1ac5-48a4-b25b-4cf7edaeef5d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.036852681s
addons_test.go:583: (dbg) Run:  kubectl --context addons-459174 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-459174 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-459174 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-459174 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-459174 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-459174 delete pod task-pv-pod: (1.135220375s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-459174 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-459174 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-459174 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2eb2dd2d-368c-4783-b556-b88cd4f21ac5] Pending
helpers_test.go:344: "task-pv-pod-restore" [2eb2dd2d-368c-4783-b556-b88cd4f21ac5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2eb2dd2d-368c-4783-b556-b88cd4f21ac5] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.017885295s
addons_test.go:625: (dbg) Run:  kubectl --context addons-459174 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-459174 delete pod task-pv-pod-restore: (1.367028308s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-459174 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-459174 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-459174 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.990843937s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (89.83s)

TestAddons/parallel/Headlamp (15.82s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-459174 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-459174 --alsologtostderr -v=1: (1.775407853s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-plznk" [f88981d5-a11b-40da-8fa9-7f09e276a293] Pending
helpers_test.go:344: "headlamp-777fd4b855-plznk" [f88981d5-a11b-40da-8fa9-7f09e276a293] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-plznk" [f88981d5-a11b-40da-8fa9-7f09e276a293] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.044548737s
--- PASS: TestAddons/parallel/Headlamp (15.82s)

TestAddons/parallel/CloudSpanner (5.99s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-cmgkw" [c9d4d93d-004e-49ca-8f49-a52b09712fd9] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015265235s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-459174
--- PASS: TestAddons/parallel/CloudSpanner (5.99s)

TestAddons/parallel/LocalPath (56.49s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-459174 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-459174 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [99085d45-93e7-4bfe-a04b-4a773dd1c215] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [99085d45-93e7-4bfe-a04b-4a773dd1c215] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [99085d45-93e7-4bfe-a04b-4a773dd1c215] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.062565908s
addons_test.go:890: (dbg) Run:  kubectl --context addons-459174 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 ssh "cat /opt/local-path-provisioner/pvc-b127a7ff-99c7-4435-af5f-944d91801ed2_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-459174 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-459174 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-459174 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-459174 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.561545952s)
--- PASS: TestAddons/parallel/LocalPath (56.49s)

TestAddons/parallel/NvidiaDevicePlugin (5.85s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-d5dnz" [934d08ef-405c-4c17-b5cd-ad3ab38cab88] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.016829446s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-459174
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.85s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-459174 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-459174 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestCertOptions (78.77s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-260603 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-260603 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m17.48864552s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-260603 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-260603 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-260603 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-260603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-260603
--- PASS: TestCertOptions (78.77s)

TestCertExpiration (336.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-723808 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1212 20:53:56.433557   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-723808 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m19.921929853s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-723808 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-723808 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m15.589795745s)
helpers_test.go:175: Cleaning up "cert-expiration-723808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-723808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-723808: (1.280743011s)
--- PASS: TestCertExpiration (336.79s)

TestForceSystemdFlag (70.8s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-675766 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-675766 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.5965571s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-675766 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-675766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-675766
--- PASS: TestForceSystemdFlag (70.80s)

TestForceSystemdEnv (68.94s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-942683 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-942683 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.913683056s)
helpers_test.go:175: Cleaning up "force-systemd-env-942683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-942683
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-942683: (1.02182553s)
--- PASS: TestForceSystemdEnv (68.94s)

TestKVMDriverInstallOrUpdate (1.57s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.57s)

TestErrorSpam/setup (47.18s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-288122 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-288122 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-288122 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-288122 --driver=kvm2  --container-runtime=crio: (47.177000946s)
--- PASS: TestErrorSpam/setup (47.18s)

TestErrorSpam/start (0.4s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.81s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.66s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 pause
--- PASS: TestErrorSpam/pause (1.66s)

TestErrorSpam/unpause (1.76s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (2.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 stop: (2.09670549s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-288122 --log_dir /tmp/nospam-288122 stop
--- PASS: TestErrorSpam/stop (2.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17734-9188/.minikube/files/etc/test/nested/copy/16456/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.27s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686513 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-686513 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m0.270994855s)
--- PASS: TestFunctional/serial/StartWithProxy (60.27s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.91s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686513 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-686513 --alsologtostderr -v=8: (40.91016296s)
functional_test.go:659: soft start took 40.910936762s for "functional-686513" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.91s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-686513 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 cache add registry.k8s.io/pause:3.1: (1.068372744s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 cache add registry.k8s.io/pause:3.3: (1.151225127s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 cache add registry.k8s.io/pause:latest: (1.10161573s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-686513 /tmp/TestFunctionalserialCacheCmdcacheadd_local3674306856/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cache add minikube-local-cache-test:functional-686513
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cache delete minikube-local-cache-test:functional-686513
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-686513
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.84871ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 kubectl -- --context functional-686513 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-686513 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.8s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686513 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-686513 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.796764969s)
functional_test.go:757: restart took 32.796893962s for "functional-686513" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.80s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-686513 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.54s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 logs: (1.534821352s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

TestFunctional/serial/LogsFileCmd (1.61s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 logs --file /tmp/TestFunctionalserialLogsFileCmd1398840654/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 logs --file /tmp/TestFunctionalserialLogsFileCmd1398840654/001/logs.txt: (1.609389204s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.61s)

TestFunctional/serial/InvalidService (4.16s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-686513 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-686513
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-686513: exit status 115 (310.421435ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.70:31947 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-686513 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 config get cpus: exit status 14 (63.22609ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 config get cpus: exit status 14 (74.19552ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (14.58s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686513 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686513 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24209: os: process already finished
E1212 20:09:44.589882   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (14.58s)
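
`minikube dashboard --url` starts the dashboard proxy and prints a local URL (here pinned to `--port 36195`); the later `GET http://127.0.0.1:36195/...` debug line in this log is that endpoint being polled. A rough manual equivalent (the curl check is an illustration, not part of the test):

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686513 &
	# once the URL is printed, the proxied dashboard answers on it:
	curl -s http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ >/dev/null && echo dashboard reachable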

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686513 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-686513 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.945996ms)

                                                
                                                
-- stdout --
	* [functional-686513] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:09:29.675771   24019 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:09:29.675980   24019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:29.675994   24019 out.go:309] Setting ErrFile to fd 2...
	I1212 20:09:29.676001   24019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:29.676279   24019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:09:29.677007   24019 out.go:303] Setting JSON to false
	I1212 20:09:29.677887   24019 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3124,"bootTime":1702408646,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:09:29.677947   24019 start.go:138] virtualization: kvm guest
	I1212 20:09:29.680363   24019 out.go:177] * [functional-686513] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:09:29.682240   24019 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:09:29.682208   24019 notify.go:220] Checking for updates...
	I1212 20:09:29.685373   24019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:29.686939   24019 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:09:29.688421   24019 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:09:29.689744   24019 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:09:29.691168   24019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:09:29.692959   24019 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:09:29.693379   24019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:09:29.693425   24019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:09:29.708160   24019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I1212 20:09:29.708523   24019 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:09:29.709036   24019 main.go:141] libmachine: Using API Version  1
	I1212 20:09:29.709064   24019 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:09:29.709360   24019 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:09:29.709561   24019 main.go:141] libmachine: (functional-686513) Calling .DriverName
	I1212 20:09:29.709821   24019 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:09:29.710091   24019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:09:29.710146   24019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:09:29.725558   24019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I1212 20:09:29.725949   24019 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:09:29.726364   24019 main.go:141] libmachine: Using API Version  1
	I1212 20:09:29.726416   24019 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:09:29.726748   24019 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:09:29.726953   24019 main.go:141] libmachine: (functional-686513) Calling .DriverName
	I1212 20:09:29.760473   24019 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 20:09:29.761893   24019 start.go:298] selected driver: kvm2
	I1212 20:09:29.761904   24019 start.go:902] validating driver "kvm2" against &{Name:functional-686513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-686513 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.70 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:09:29.762016   24019 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:09:29.764152   24019 out.go:177] 
	W1212 20:09:29.765638   24019 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:09:29.767221   24019 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686513 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
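
The non-zero exit shows the validation path: `--dry-run` still validates the requested resources against the existing profile, so a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second run without `--memory` succeeds. Condensed:

	# fails: 250MB is below the usable minimum of 1800MB (exit status 23)
	out/minikube-linux-amd64 start -p functional-686513 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	# succeeds: no memory override, the existing profile values are reused
	out/minikube-linux-amd64 start -p functional-686513 --dry-run --driver=kvm2 --container-runtime=crio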

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686513 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-686513 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.011454ms)

                                                
                                                
-- stdout --
	* [functional-686513] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:09:23.949335   23594 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:09:23.949456   23594 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:23.949464   23594 out.go:309] Setting ErrFile to fd 2...
	I1212 20:09:23.949469   23594 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:23.949850   23594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:09:23.950559   23594 out.go:303] Setting JSON to false
	I1212 20:09:23.951769   23594 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3118,"bootTime":1702408646,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:09:23.951868   23594 start.go:138] virtualization: kvm guest
	I1212 20:09:23.954742   23594 out.go:177] * [functional-686513] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1212 20:09:23.956959   23594 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:09:23.956957   23594 notify.go:220] Checking for updates...
	I1212 20:09:23.958528   23594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:23.960306   23594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:09:23.961990   23594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:09:23.963630   23594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:09:23.965179   23594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:09:23.967797   23594 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:09:23.968238   23594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:09:23.968278   23594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:09:23.984146   23594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46623
	I1212 20:09:23.984599   23594 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:09:23.985353   23594 main.go:141] libmachine: Using API Version  1
	I1212 20:09:23.985382   23594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:09:23.985770   23594 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:09:23.985993   23594 main.go:141] libmachine: (functional-686513) Calling .DriverName
	I1212 20:09:23.986261   23594 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:09:23.986621   23594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:09:23.986678   23594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:09:24.003649   23594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I1212 20:09:24.004129   23594 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:09:24.004617   23594 main.go:141] libmachine: Using API Version  1
	I1212 20:09:24.004648   23594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:09:24.005006   23594 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:09:24.005187   23594 main.go:141] libmachine: (functional-686513) Calling .DriverName
	I1212 20:09:24.041245   23594 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 20:09:24.042829   23594 start.go:298] selected driver: kvm2
	I1212 20:09:24.042850   23594 start.go:902] validating driver "kvm2" against &{Name:functional-686513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-686513 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.70 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 20:09:24.042996   23594 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:09:24.045718   23594 out.go:177] 
	W1212 20:09:24.047335   23594 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 20:09:24.048890   23594 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
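
This is the same dry-run failure as above with minikube's output localized to French; the message translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB". The log does not show how the locale is switched; presumably it is driven by the locale environment, roughly:

	# assumption: LC_ALL / LANG select the translation; the exact mechanism is not shown in this log
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-686513 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio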

                                                
                                    
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
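
`minikube status` supports a Go-template output via `-f` (the fields used above are `.Host`, `.Kubelet`, `.APIServer`, `.Kubeconfig`; the label text before each field is arbitrary) and machine-readable output via `-o json`. For example:

	out/minikube-linux-amd64 -p functional-686513 status                  # human-readable summary
	out/minikube-linux-amd64 -p functional-686513 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-linux-amd64 -p functional-686513 status -o json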

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (24.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-686513 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-686513 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cpmqt" [ce4ef3f9-853a-42cf-bf6d-9623742df7c5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cpmqt" [ce4ef3f9-853a-42cf-bf6d-9623742df7c5] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.020116986s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.70:30739
functional_test.go:1674: http://192.168.50.70:30739: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-cpmqt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.70:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.70:30739
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.71s)
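
The flow here is: create an echoserver deployment, expose it as a NodePort service, ask minikube for the node URL, then hit that URL; the echoserver reply above confirms the request reached the pod. Condensed (the curl step is an illustration; the test fetches the URL from Go):

	kubectl --context functional-686513 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-686513 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-686513 service hello-node-connect --url)
	curl -s "$URL"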

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (49.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [51124554-776a-4695-b0c1-f8828d6b4da9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01587758s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-686513 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-686513 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-686513 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-686513 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-686513 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1920a4bb-433c-472a-b13d-724e5e5420e9] Pending
helpers_test.go:344: "sp-pod" [1920a4bb-433c-472a-b13d-724e5e5420e9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1920a4bb-433c-472a-b13d-724e5e5420e9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.051769215s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-686513 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-686513 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-686513 delete -f testdata/storage-provisioner/pod.yaml: (1.344913926s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-686513 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bbb155bf-707d-4d94-aec0-7c8dac66b034] Pending
helpers_test.go:344: "sp-pod" [bbb155bf-707d-4d94-aec0-7c8dac66b034] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bbb155bf-707d-4d94-aec0-7c8dac66b034] Running
2023/12/12 20:09:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.06473095s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-686513 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.61s)
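
The test verifies that data written through a PVC survives pod deletion: it applies testdata/storage-provisioner/pvc.yaml (claim `myclaim`), starts a pod that mounts it, writes /tmp/mount/foo, deletes and recreates the pod, and checks the file is still there. The same flow by hand (manifest contents are not reproduced here):

	kubectl --context functional-686513 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-686513 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-686513 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-686513 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-686513 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-686513 exec sp-pod -- ls /tmp/mount   # foo should still be listed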

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh -n functional-686513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cp functional-686513:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3343114874/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh -n functional-686513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh -n functional-686513 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)
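
`minikube cp` copies in both directions: a guest path can be qualified with the node name (`functional-686513:`), and copying into a directory that does not yet exist in the guest works, since the passing `sudo cat` afterwards implies the parent directories were created. The three cases exercised above (the host destination below is illustrative):

	out/minikube-linux-amd64 -p functional-686513 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	out/minikube-linux-amd64 -p functional-686513 cp functional-686513:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
	out/minikube-linux-amd64 -p functional-686513 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # missing guest dirs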

                                                
                                    
TestFunctional/parallel/MySQL (26.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-686513 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-dzdp4" [9291a722-d80f-4ef5-bc6c-fa4211f02dfa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-dzdp4" [9291a722-d80f-4ef5-bc6c-fa4211f02dfa] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.040912882s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-686513 exec mysql-859648c796-dzdp4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-686513 exec mysql-859648c796-dzdp4 -- mysql -ppassword -e "show databases;": exit status 1 (494.568771ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-686513 exec mysql-859648c796-dzdp4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-686513 exec mysql-859648c796-dzdp4 -- mysql -ppassword -e "show databases;": exit status 1 (433.923125ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-686513 exec mysql-859648c796-dzdp4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.24s)
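
The two non-zero exits are the usual start-up window: ERROR 1045 is typically seen while the container's init is still setting up credentials and ERROR 2002 while the server socket is not yet up; the third attempt succeeds. A hedged readiness loop along the same lines (the `-ppassword` value comes from the log; the Deployment name `mysql` and the loop itself are illustrative, not what the test runs):

	# wait until mysqld in the mysql Deployment answers "show databases;"
	until kubectl --context functional-686513 exec deploy/mysql -- mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	  sleep 5
	done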

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/16456/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /etc/test/nested/copy/16456/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
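
File sync copies anything placed under the profile's `$MINIKUBE_HOME/files` tree into the guest at the same path when the cluster starts, which is how /etc/test/nested/copy/16456/hosts ends up in the VM (16456 matches the test process ID seen elsewhere in this log). A sketch of the mechanism, assuming the standard `~/.minikube/files` layout and that a (re)start happens after the file is placed:

	# assumption: files under $MINIKUBE_HOME/files/<path> are synced into the guest at /<path> on start
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/16456"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/16456/hosts"
	out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /etc/test/nested/copy/16456/hosts"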

                                                
                                    
TestFunctional/parallel/CertSync (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/16456.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /etc/ssl/certs/16456.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/16456.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /usr/share/ca-certificates/16456.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/164562.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /etc/ssl/certs/164562.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/164562.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /usr/share/ca-certificates/164562.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)
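
Cert sync places the user's extra certificates at both /etc/ssl/certs/<name>.pem and /usr/share/ca-certificates/<name>.pem, plus a hash-named `.0` entry; the 51391683.0 and 3ec20f2e.0 names look like the usual OpenSSL subject-hash convention for CA directories (an assumption, not something this test asserts). If openssl is available inside the guest, the mapping can be cross-checked:

	# assumption: the "<hash>.0" file name equals the subject hash of the corresponding .pem
	out/minikube-linux-amd64 -p functional-686513 ssh "sudo openssl x509 -noout -hash -in /usr/share/ca-certificates/16456.pem"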

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-686513 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active docker": exit status 1 (266.237121ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active containerd": exit status 1 (301.264893ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
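
With crio as the selected runtime, both docker and containerd are expected to be stopped: `systemctl is-active` prints "inactive" and exits non-zero (status 3 here), which surfaces through `minikube ssh` as "Process exited with status 3". The check itself is just:

	out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active docker"       # prints "inactive", non-zero exit
	out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active containerd"   # same
	out/minikube-linux-amd64 -p functional-686513 ssh "sudo systemctl is-active crio"         # expected "active" (assumption; not run by this test)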

                                                
                                    
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
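
`minikube update-context` rewrites the kubeconfig entry for the profile so that it points at the cluster's current IP and port; the three subtests only differ in the state the kubeconfig starts in. A manual check of the result (the kubectl line is an illustration and assumes the functional-686513 context is the current one):

	out/minikube-linux-amd64 -p functional-686513 update-context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should point at the profile's API server (here 192.168.50.70:8441)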

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686513 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-686513
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-686513
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686513 image ls --format short --alsologtostderr:
I1212 20:09:36.178487   24839 out.go:296] Setting OutFile to fd 1 ...
I1212 20:09:36.178787   24839 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.178797   24839 out.go:309] Setting ErrFile to fd 2...
I1212 20:09:36.178801   24839 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.179022   24839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
I1212 20:09:36.179833   24839 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.179980   24839 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.180461   24839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.180512   24839 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.195603   24839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
I1212 20:09:36.196115   24839 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.196721   24839 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.196745   24839 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.197103   24839 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.197270   24839 main.go:141] libmachine: (functional-686513) Calling .GetState
I1212 20:09:36.199522   24839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.199559   24839 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.214873   24839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36369
I1212 20:09:36.215381   24839 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.215808   24839 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.215831   24839 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.216401   24839 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.216539   24839 main.go:141] libmachine: (functional-686513) Calling .DriverName
I1212 20:09:36.216744   24839 ssh_runner.go:195] Run: systemctl --version
I1212 20:09:36.216775   24839 main.go:141] libmachine: (functional-686513) Calling .GetSSHHostname
I1212 20:09:36.219862   24839 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.220405   24839 main.go:141] libmachine: (functional-686513) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:77:4b", ip: ""} in network mk-functional-686513: {Iface:virbr1 ExpiryTime:2023-12-12 21:06:43 +0000 UTC Type:0 Mac:52:54:00:ce:77:4b Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:functional-686513 Clientid:01:52:54:00:ce:77:4b}
I1212 20:09:36.220721   24839 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined IP address 192.168.50.70 and MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.220543   24839 main.go:141] libmachine: (functional-686513) Calling .GetSSHPort
I1212 20:09:36.220892   24839 main.go:141] libmachine: (functional-686513) Calling .GetSSHKeyPath
I1212 20:09:36.221053   24839 main.go:141] libmachine: (functional-686513) Calling .GetSSHUsername
I1212 20:09:36.221849   24839 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/functional-686513/id_rsa Username:docker}
I1212 20:09:36.380116   24839 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 20:09:36.493520   24839 main.go:141] libmachine: Making call to close driver server
I1212 20:09:36.493537   24839 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:36.493830   24839 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:36.493855   24839 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 20:09:36.493867   24839 main.go:141] libmachine: Making call to close driver server
I1212 20:09:36.493877   24839 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:36.493830   24839 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:36.494145   24839 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:36.494163   24839 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)
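
On a crio cluster, `minikube image ls` is backed by `sudo crictl images --output json` inside the guest (visible in the stderr above), and `--format` only changes how that list is rendered. The same data can be pulled either way:

	out/minikube-linux-amd64 -p functional-686513 image ls --format short
	out/minikube-linux-amd64 -p functional-686513 ssh "sudo crictl images --output json"   # raw source of the listing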

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686513 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-686513  | 3d8ef45cb4078 | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-686513  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686513 image ls --format table --alsologtostderr:
I1212 20:09:36.915723   24961 out.go:296] Setting OutFile to fd 1 ...
I1212 20:09:36.915890   24961 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.915900   24961 out.go:309] Setting ErrFile to fd 2...
I1212 20:09:36.915905   24961 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.916107   24961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
I1212 20:09:36.916708   24961 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.916831   24961 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.917216   24961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.917269   24961 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.933274   24961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41713
I1212 20:09:36.933771   24961 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.934449   24961 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.934473   24961 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.934842   24961 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.935029   24961 main.go:141] libmachine: (functional-686513) Calling .GetState
I1212 20:09:36.936912   24961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.936954   24961 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.951676   24961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
I1212 20:09:36.952050   24961 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.952526   24961 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.952547   24961 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.952848   24961 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.953033   24961 main.go:141] libmachine: (functional-686513) Calling .DriverName
I1212 20:09:36.953303   24961 ssh_runner.go:195] Run: systemctl --version
I1212 20:09:36.953328   24961 main.go:141] libmachine: (functional-686513) Calling .GetSSHHostname
I1212 20:09:36.956346   24961 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.956741   24961 main.go:141] libmachine: (functional-686513) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:77:4b", ip: ""} in network mk-functional-686513: {Iface:virbr1 ExpiryTime:2023-12-12 21:06:43 +0000 UTC Type:0 Mac:52:54:00:ce:77:4b Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:functional-686513 Clientid:01:52:54:00:ce:77:4b}
I1212 20:09:36.956781   24961 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined IP address 192.168.50.70 and MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.956942   24961 main.go:141] libmachine: (functional-686513) Calling .GetSSHPort
I1212 20:09:36.957137   24961 main.go:141] libmachine: (functional-686513) Calling .GetSSHKeyPath
I1212 20:09:36.957298   24961 main.go:141] libmachine: (functional-686513) Calling .GetSSHUsername
I1212 20:09:36.957425   24961 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/functional-686513/id_rsa Username:docker}
I1212 20:09:37.094982   24961 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 20:09:37.215937   24961 main.go:141] libmachine: Making call to close driver server
I1212 20:09:37.215961   24961 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:37.216218   24961 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:37.216239   24961 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 20:09:37.216257   24961 main.go:141] libmachine: Making call to close driver server
I1212 20:09:37.216272   24961 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:37.216610   24961 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:37.216616   24961 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:37.216653   24961 main.go:141] libmachine: Making call to close connection to plugin binary
E1212 20:09:39.384408   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:39.390394   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:39.400750   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:39.421041   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:39.461328   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:39.541717   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:39.702273   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:40.108190   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:40.749012   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:42.029370   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686513 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c",
"repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8
s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"3d8ef45cb40781fca1f4c7a36ca1d19ece9ece9f50ec058571e166f3c7dbfc8f","repoDigests":["localhost/minikube-local-cache-test@sha256:f567944e64aa544d5a529eed6f441bb687f32110e0041c698e71e6d7ddf17d7f"],"repoTags":["localhost/minikube-local-cache-test:functional-686513"],"size":"3345"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a28993043
98e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-686513"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gc
r.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"73de
b9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","rep
oDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686513 image ls --format json --alsologtostderr:
I1212 20:09:36.576095   24896 out.go:296] Setting OutFile to fd 1 ...
I1212 20:09:36.576405   24896 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.576419   24896 out.go:309] Setting ErrFile to fd 2...
I1212 20:09:36.576425   24896 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.576750   24896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
I1212 20:09:36.577619   24896 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.577776   24896 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.578307   24896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.578362   24896 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.593207   24896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
I1212 20:09:36.593685   24896 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.594259   24896 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.594282   24896 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.594606   24896 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.594954   24896 main.go:141] libmachine: (functional-686513) Calling .GetState
I1212 20:09:36.596887   24896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.596933   24896 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.611883   24896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
I1212 20:09:36.612289   24896 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.612741   24896 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.612763   24896 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.613123   24896 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.613295   24896 main.go:141] libmachine: (functional-686513) Calling .DriverName
I1212 20:09:36.613562   24896 ssh_runner.go:195] Run: systemctl --version
I1212 20:09:36.613595   24896 main.go:141] libmachine: (functional-686513) Calling .GetSSHHostname
I1212 20:09:36.616356   24896 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.616776   24896 main.go:141] libmachine: (functional-686513) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:77:4b", ip: ""} in network mk-functional-686513: {Iface:virbr1 ExpiryTime:2023-12-12 21:06:43 +0000 UTC Type:0 Mac:52:54:00:ce:77:4b Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:functional-686513 Clientid:01:52:54:00:ce:77:4b}
I1212 20:09:36.616818   24896 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined IP address 192.168.50.70 and MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.617174   24896 main.go:141] libmachine: (functional-686513) Calling .GetSSHPort
I1212 20:09:36.617428   24896 main.go:141] libmachine: (functional-686513) Calling .GetSSHKeyPath
I1212 20:09:36.617593   24896 main.go:141] libmachine: (functional-686513) Calling .GetSSHUsername
I1212 20:09:36.617718   24896 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/functional-686513/id_rsa Username:docker}
I1212 20:09:36.738541   24896 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 20:09:36.849723   24896 main.go:141] libmachine: Making call to close driver server
I1212 20:09:36.849739   24896 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:36.850068   24896 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:36.850073   24896 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:36.850114   24896 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 20:09:36.850143   24896 main.go:141] libmachine: Making call to close driver server
I1212 20:09:36.850158   24896 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:36.850373   24896 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:36.850386   24896 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
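Note: the `image ls --format json` stdout above is a single JSON array of image records with the fields id, repoDigests, repoTags and size. A minimal sketch of post-processing that output with jq, assuming jq is available on the host (the filter itself is illustrative and not part of the test):

# Sketch: list only tagged images with their sizes, using the field names shown in the stdout above.
out/minikube-linux-amd64 -p functional-686513 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'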

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686513 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: 3d8ef45cb40781fca1f4c7a36ca1d19ece9ece9f50ec058571e166f3c7dbfc8f
repoDigests:
- localhost/minikube-local-cache-test@sha256:f567944e64aa544d5a529eed6f441bb687f32110e0041c698e71e6d7ddf17d7f
repoTags:
- localhost/minikube-local-cache-test:functional-686513
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-686513
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686513 image ls --format yaml --alsologtostderr:
I1212 20:09:36.179188   24845 out.go:296] Setting OutFile to fd 1 ...
I1212 20:09:36.179457   24845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.179467   24845 out.go:309] Setting ErrFile to fd 2...
I1212 20:09:36.179472   24845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.179653   24845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
I1212 20:09:36.180157   24845 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.180263   24845 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.180661   24845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.180710   24845 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.195900   24845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
I1212 20:09:36.196303   24845 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.196861   24845 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.196889   24845 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.197254   24845 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.197427   24845 main.go:141] libmachine: (functional-686513) Calling .GetState
I1212 20:09:36.199387   24845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.199430   24845 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.215008   24845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
I1212 20:09:36.215453   24845 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.215991   24845 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.216015   24845 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.216327   24845 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.216594   24845 main.go:141] libmachine: (functional-686513) Calling .DriverName
I1212 20:09:36.216802   24845 ssh_runner.go:195] Run: systemctl --version
I1212 20:09:36.216828   24845 main.go:141] libmachine: (functional-686513) Calling .GetSSHHostname
I1212 20:09:36.220124   24845 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.220485   24845 main.go:141] libmachine: (functional-686513) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:77:4b", ip: ""} in network mk-functional-686513: {Iface:virbr1 ExpiryTime:2023-12-12 21:06:43 +0000 UTC Type:0 Mac:52:54:00:ce:77:4b Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:functional-686513 Clientid:01:52:54:00:ce:77:4b}
I1212 20:09:36.220512   24845 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined IP address 192.168.50.70 and MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.220728   24845 main.go:141] libmachine: (functional-686513) Calling .GetSSHPort
I1212 20:09:36.220893   24845 main.go:141] libmachine: (functional-686513) Calling .GetSSHKeyPath
I1212 20:09:36.221053   24845 main.go:141] libmachine: (functional-686513) Calling .GetSSHUsername
I1212 20:09:36.221205   24845 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/functional-686513/id_rsa Username:docker}
I1212 20:09:36.323852   24845 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 20:09:36.434109   24845 main.go:141] libmachine: Making call to close driver server
I1212 20:09:36.434127   24845 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:36.434471   24845 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:36.434526   24845 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:36.434546   24845 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 20:09:36.434559   24845 main.go:141] libmachine: Making call to close driver server
I1212 20:09:36.434571   24845 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:36.434817   24845 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:36.434972   24845 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:36.435007   24845 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
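Note: the YAML listing carries the same fields as the JSON variant. A small illustrative sketch for pulling out just the tagged image names with standard text tools (not something the test runs):

# Sketch: print the tag line that follows each "repoTags:" entry; untagged images show "repoTags: []" and are skipped.
out/minikube-linux-amd64 -p functional-686513 image ls --format yaml \
  | grep -A1 '^repoTags:' | grep '^- '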

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh pgrep buildkitd: exit status 1 (270.673716ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image build -t localhost/my-image:functional-686513 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image build -t localhost/my-image:functional-686513 testdata/build --alsologtostderr: (5.676199299s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686513 image build -t localhost/my-image:functional-686513 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 55704a20c10
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-686513
--> 0bda15c6f77
Successfully tagged localhost/my-image:functional-686513
0bda15c6f778eeae8a3b559612dc66500bcd0c9afe4c8eda82ebbfe1bebf0540
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686513 image build -t localhost/my-image:functional-686513 testdata/build --alsologtostderr:
I1212 20:09:36.783530   24938 out.go:296] Setting OutFile to fd 1 ...
I1212 20:09:36.783731   24938 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.783741   24938 out.go:309] Setting ErrFile to fd 2...
I1212 20:09:36.783746   24938 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 20:09:36.783945   24938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
I1212 20:09:36.784560   24938 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.785112   24938 config.go:182] Loaded profile config "functional-686513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 20:09:36.785585   24938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.785663   24938 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.801481   24938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
I1212 20:09:36.801952   24938 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.802561   24938 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.802587   24938 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.803011   24938 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.803225   24938 main.go:141] libmachine: (functional-686513) Calling .GetState
I1212 20:09:36.805233   24938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 20:09:36.805279   24938 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 20:09:36.820875   24938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
I1212 20:09:36.821482   24938 main.go:141] libmachine: () Calling .GetVersion
I1212 20:09:36.822086   24938 main.go:141] libmachine: Using API Version  1
I1212 20:09:36.822119   24938 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 20:09:36.822427   24938 main.go:141] libmachine: () Calling .GetMachineName
I1212 20:09:36.822642   24938 main.go:141] libmachine: (functional-686513) Calling .DriverName
I1212 20:09:36.822879   24938 ssh_runner.go:195] Run: systemctl --version
I1212 20:09:36.822906   24938 main.go:141] libmachine: (functional-686513) Calling .GetSSHHostname
I1212 20:09:36.826132   24938 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.826640   24938 main.go:141] libmachine: (functional-686513) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:77:4b", ip: ""} in network mk-functional-686513: {Iface:virbr1 ExpiryTime:2023-12-12 21:06:43 +0000 UTC Type:0 Mac:52:54:00:ce:77:4b Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:functional-686513 Clientid:01:52:54:00:ce:77:4b}
I1212 20:09:36.826671   24938 main.go:141] libmachine: (functional-686513) DBG | domain functional-686513 has defined IP address 192.168.50.70 and MAC address 52:54:00:ce:77:4b in network mk-functional-686513
I1212 20:09:36.826860   24938 main.go:141] libmachine: (functional-686513) Calling .GetSSHPort
I1212 20:09:36.827050   24938 main.go:141] libmachine: (functional-686513) Calling .GetSSHKeyPath
I1212 20:09:36.827219   24938 main.go:141] libmachine: (functional-686513) Calling .GetSSHUsername
I1212 20:09:36.827383   24938 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/functional-686513/id_rsa Username:docker}
I1212 20:09:36.943316   24938 build_images.go:151] Building image from path: /tmp/build.933574123.tar
I1212 20:09:36.943418   24938 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 20:09:36.983738   24938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.933574123.tar
I1212 20:09:36.999486   24938 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.933574123.tar: stat -c "%s %y" /var/lib/minikube/build/build.933574123.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.933574123.tar': No such file or directory
I1212 20:09:36.999525   24938 ssh_runner.go:362] scp /tmp/build.933574123.tar --> /var/lib/minikube/build/build.933574123.tar (3072 bytes)
I1212 20:09:37.060545   24938 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.933574123
I1212 20:09:37.084723   24938 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.933574123 -xf /var/lib/minikube/build/build.933574123.tar
I1212 20:09:37.120719   24938 crio.go:297] Building image: /var/lib/minikube/build/build.933574123
I1212 20:09:37.120776   24938 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-686513 /var/lib/minikube/build/build.933574123 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 20:09:42.364662   24938 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-686513 /var/lib/minikube/build/build.933574123 --cgroup-manager=cgroupfs: (5.24385908s)
I1212 20:09:42.364732   24938 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.933574123
I1212 20:09:42.374217   24938 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.933574123.tar
I1212 20:09:42.382827   24938 build_images.go:207] Built localhost/my-image:functional-686513 from /tmp/build.933574123.tar
I1212 20:09:42.382864   24938 build_images.go:123] succeeded building to: functional-686513
I1212 20:09:42.382870   24938 build_images.go:124] failed building to: 
I1212 20:09:42.382895   24938 main.go:141] libmachine: Making call to close driver server
I1212 20:09:42.382913   24938 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:42.383232   24938 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:42.383254   24938 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:42.383270   24938 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 20:09:42.383281   24938 main.go:141] libmachine: Making call to close driver server
I1212 20:09:42.383291   24938 main.go:141] libmachine: (functional-686513) Calling .Close
I1212 20:09:42.383502   24938 main.go:141] libmachine: (functional-686513) DBG | Closing plugin on server side
I1212 20:09:42.383530   24938 main.go:141] libmachine: Successfully made call to close driver server
I1212 20:09:42.383550   24938 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.21s)
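Note: the three build steps in the stdout above imply a very small build context. A hypothetical reconstruction of `testdata/build` consistent with those steps (the real files in the minikube repo may differ; the content.txt contents are a placeholder):

# Hypothetical build context matching the observed steps (FROM busybox, RUN true, ADD content.txt).
mkdir -p testdata/build
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "example content" > testdata/build/content.txt   # placeholder; the real file contents are not shown in the log
out/minikube-linux-amd64 -p functional-686513 image build -t localhost/my-image:functional-686513 testdata/build --alsologtostderr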

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.018249672s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-686513
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "272.38772ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "67.126402ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "312.737285ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "65.052683ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr: (6.447644996s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-686513
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr: (5.437064748s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.89s)
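Note: Setup, ImageReloadDaemon and ImageTagAndLoadDaemon together exercise the "pull on the host, retag, copy into the cluster's CRI-O store" flow. A condensed sketch of that flow, using the same commands that appear in the log:

# Pull and retag on the host docker daemon, then copy the image into the minikube node.
docker pull gcr.io/google-containers/addon-resizer:1.8.9
docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-686513
out/minikube-linux-amd64 -p functional-686513 image load --daemon gcr.io/google-containers/addon-resizer:functional-686513
out/minikube-linux-amd64 -p functional-686513 image ls   # verify the tag now shows up inside the cluster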

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image save gcr.io/google-containers/addon-resizer:functional-686513 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image save gcr.io/google-containers/addon-resizer:functional-686513 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.175323733s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-686513 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-686513 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-2dmdf" [3f9857d8-1cc9-400a-aab6-5371ad224077] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-2dmdf" [3f9857d8-1cc9-400a-aab6-5371ad224077] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.031012533s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.76s)
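Note: the DeployApp steps boil down to a standard deployment-plus-NodePort exposure. A compact sketch of the same sequence, with the readiness poll expressed via kubectl wait as an illustrative equivalent of the test helper (the harness itself polls differently):

kubectl --context functional-686513 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-686513 expose deployment hello-node --type=NodePort --port=8080
# Illustrative equivalent of the test's readiness wait:
kubectl --context functional-686513 wait --for=condition=Ready pod -l app=hello-node --timeout=10m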

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdany-port3696760567/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702411764056092868" to /tmp/TestFunctionalparallelMountCmdany-port3696760567/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702411764056092868" to /tmp/TestFunctionalparallelMountCmdany-port3696760567/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702411764056092868" to /tmp/TestFunctionalparallelMountCmdany-port3696760567/001/test-1702411764056092868
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.308806ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 20:09 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 20:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 20:09 test-1702411764056092868
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh cat /mount-9p/test-1702411764056092868
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-686513 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7c958f3f-e56a-4a34-9009-2506e9f7a562] Pending
helpers_test.go:344: "busybox-mount" [7c958f3f-e56a-4a34-9009-2506e9f7a562] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7c958f3f-e56a-4a34-9009-2506e9f7a562] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7c958f3f-e56a-4a34-9009-2506e9f7a562] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.025534676s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-686513 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdany-port3696760567/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.89s)
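Note: the any-port mount test follows a simple pattern: start a 9p mount in the background, confirm it from inside the guest, exercise it from a pod, then tear it down. A hedged sketch of the host-side half (the host directory below is an example path, not the tmp dir above):

# Start a host-directory mount into the guest; the command runs in the foreground, so background it for scripting.
out/minikube-linux-amd64 mount -p functional-686513 /some/host/dir:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# Verify from inside the node that the 9p filesystem is mounted, then inspect its contents.
out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-686513 ssh -- ls -la /mount-9p
# Tear down the background mount process.
kill "$MOUNT_PID"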

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image rm gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.008421716s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-686513
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 image save --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 image save --daemon gcr.io/google-containers/addon-resizer:functional-686513 --alsologtostderr: (1.519396984s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-686513
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)
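Note: ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon form a round trip between the cluster image store, a tarball on disk, and the host docker daemon. A consolidated sketch using the same paths as the log (any writable tar path would do):

# Cluster -> tarball
out/minikube-linux-amd64 -p functional-686513 image save gcr.io/google-containers/addon-resizer:functional-686513 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
# Tarball -> cluster (e.g. after the image was removed with `image rm`)
out/minikube-linux-amd64 -p functional-686513 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
# Cluster -> host docker daemon, then confirm it landed
out/minikube-linux-amd64 -p functional-686513 image save --daemon gcr.io/google-containers/addon-resizer:functional-686513
docker image inspect gcr.io/google-containers/addon-resizer:functional-686513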

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdspecific-port3180229304/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (305.791172ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdspecific-port3180229304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh "sudo umount -f /mount-9p": exit status 1 (285.293514ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-686513 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdspecific-port3180229304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 service list: (1.347861734s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-686513 service list -o json: (1.376584417s)
functional_test.go:1493: Took "1.376707705s" to run "out/minikube-linux-amd64 -p functional-686513 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2071876245/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2071876245/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2071876245/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T" /mount1: exit status 1 (359.182435ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-686513 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2071876245/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2071876245/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2071876245/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.70:30439
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-686513 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.70:30439
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
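Note: once `minikube service ... --url` has printed a NodePort endpoint, it can be hit directly from the host. A minimal sketch (the endpoint http://192.168.50.70:30439 is specific to this run, so the URL is captured from the command rather than hard-coded):

URL=$(out/minikube-linux-amd64 -p functional-686513 service hello-node --url)
curl -s "$URL"   # echoserver replies with the request details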

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-686513
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-686513
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-686513
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (105.36s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-435457 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1212 20:09:49.710728   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:09:59.951813   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:10:20.432430   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:11:01.393269   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-435457 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m45.361095502s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (105.36s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.63s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons enable ingress --alsologtostderr -v=5: (13.627989788s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.63s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-435457 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
TestJSONOutput/start/Command (58.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-172990 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1212 20:15:07.157511   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:15:18.357569   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-172990 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.824736023s)
--- PASS: TestJSONOutput/start/Command (58.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-172990 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-172990 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-172990 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-172990 --output=json --user=testUser: (7.100226692s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-421528 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-421528 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.15976ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ff9440f6-a9c3-4de8-ab39-ab1d7c7e4bdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-421528] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"98366488-9976-4549-b284-a3b9a8bc27bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17734"}}
	{"specversion":"1.0","id":"cf0e91ac-afc3-4ac7-94c3-fae37693421c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"83bcec91-705d-45b5-b676-6cd454122a2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig"}}
	{"specversion":"1.0","id":"7749a60b-e239-4fc3-815d-6dd8c969078f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube"}}
	{"specversion":"1.0","id":"75c9e0e0-92de-451b-bc34-c8e9e20a1d6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0f4186c5-e9ee-4510-aade-5e1f2f06844c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"45e215c3-af8f-41c8-b19f-8b3a8246c749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-421528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-421528
--- PASS: TestErrorJSONOutput (0.22s)
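The stdout above shows the event stream minikube produces with --output=json: each line is a single JSON object carrying specversion, id, source, type, datacontenttype and a type-specific data payload (step, info and error events all appear in this run, the last one with exitcode 56 and name DRV_UNSUPPORTED_OS). Below is a minimal sketch of consuming such a stream; it assumes the lines arrive on stdin and models only the fields visible in this report, so it is an illustration rather than minikube's own event handling.

// Sketch (not part of the minikube test suite): decode JSON event lines like
// the ones shown in the stdout above. Only fields visible in this report are modeled.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event line
		}
		// io.k8s.sigs.minikube.error events carry exitcode, name and message,
		// e.g. the DRV_UNSUPPORTED_OS / exit code 56 event above.
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
}

Piping the output of a command such as the one above (minikube start --output=json ...) through a small filter like this is one way to surface only the error events in a CI log.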

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.55s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-618385 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-618385 --driver=kvm2  --container-runtime=crio: (47.138104576s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-621532 --driver=kvm2  --container-runtime=crio
E1212 20:16:40.278752   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:16:48.881271   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:48.886602   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:48.896936   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:48.917248   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:48.957590   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:49.037911   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:49.198368   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:49.519139   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:50.160212   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:51.440787   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:54.001521   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:16:59.122052   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:17:09.362567   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-621532 --driver=kvm2  --container-runtime=crio: (47.789294875s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-618385
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-621532
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-621532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-621532
helpers_test.go:175: Cleaning up "first-618385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-618385
--- PASS: TestMinikubeProfile (97.55s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-581866 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 20:17:29.843454   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-581866 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.203587128s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.20s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-581866 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-581866 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-600279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 20:18:10.803890   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-600279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.898561629s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.90s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-581866 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-600279
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-600279: (1.163968424s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.6s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-600279
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-600279: (20.59459333s)
--- PASS: TestMountStart/serial/RestartStopped (21.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-562818 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 20:18:56.433597   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:19:24.119202   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:19:32.726622   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:19:39.384728   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-562818 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.959537357s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.39s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-562818 -- rollout status deployment/busybox: (2.669277805s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-9wvsx -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-9wvsx -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-9wvsx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-562818 -- exec busybox-5bc68d56bd-vbpn5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.43s)

                                                
                                    
TestMultiNode/serial/AddNode (44.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-562818 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-562818 -v 3 --alsologtostderr: (44.098242786s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.71s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-562818 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp testdata/cp-test.txt multinode-562818:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1880154385/001/cp-test_multinode-562818.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818:/home/docker/cp-test.txt multinode-562818-m02:/home/docker/cp-test_multinode-562818_multinode-562818-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m02 "sudo cat /home/docker/cp-test_multinode-562818_multinode-562818-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818:/home/docker/cp-test.txt multinode-562818-m03:/home/docker/cp-test_multinode-562818_multinode-562818-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m03 "sudo cat /home/docker/cp-test_multinode-562818_multinode-562818-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp testdata/cp-test.txt multinode-562818-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1880154385/001/cp-test_multinode-562818-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818-m02:/home/docker/cp-test.txt multinode-562818:/home/docker/cp-test_multinode-562818-m02_multinode-562818.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818 "sudo cat /home/docker/cp-test_multinode-562818-m02_multinode-562818.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818-m02:/home/docker/cp-test.txt multinode-562818-m03:/home/docker/cp-test_multinode-562818-m02_multinode-562818-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m03 "sudo cat /home/docker/cp-test_multinode-562818-m02_multinode-562818-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp testdata/cp-test.txt multinode-562818-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1880154385/001/cp-test_multinode-562818-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt multinode-562818:/home/docker/cp-test_multinode-562818-m03_multinode-562818.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818 "sudo cat /home/docker/cp-test_multinode-562818-m03_multinode-562818.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 cp multinode-562818-m03:/home/docker/cp-test.txt multinode-562818-m02:/home/docker/cp-test_multinode-562818-m03_multinode-562818-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 ssh -n multinode-562818-m02 "sudo cat /home/docker/cp-test_multinode-562818-m03_multinode-562818-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.76s)

                                                
                                    
TestMultiNode/serial/StopNode (3.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-562818 node stop m03: (2.094155117s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-562818 status: exit status 7 (462.527825ms)

                                                
                                                
-- stdout --
	multinode-562818
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-562818-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-562818-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-562818 status --alsologtostderr: exit status 7 (461.277656ms)

                                                
                                                
-- stdout --
	multinode-562818
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-562818-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-562818-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:21:46.352720   32347 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:21:46.352874   32347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:21:46.352888   32347 out.go:309] Setting ErrFile to fd 2...
	I1212 20:21:46.352893   32347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:21:46.353096   32347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:21:46.353257   32347 out.go:303] Setting JSON to false
	I1212 20:21:46.353287   32347 mustload.go:65] Loading cluster: multinode-562818
	I1212 20:21:46.353409   32347 notify.go:220] Checking for updates...
	I1212 20:21:46.353670   32347 config.go:182] Loaded profile config "multinode-562818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:21:46.353684   32347 status.go:255] checking status of multinode-562818 ...
	I1212 20:21:46.354034   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.354100   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.375354   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I1212 20:21:46.375823   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.376455   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.376497   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.376823   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.377012   32347 main.go:141] libmachine: (multinode-562818) Calling .GetState
	I1212 20:21:46.378662   32347 status.go:330] multinode-562818 host status = "Running" (err=<nil>)
	I1212 20:21:46.378677   32347 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:21:46.378968   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.379002   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.393563   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36035
	I1212 20:21:46.393963   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.394561   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.394607   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.394925   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.395105   32347 main.go:141] libmachine: (multinode-562818) Calling .GetIP
	I1212 20:21:46.398085   32347 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:21:46.398495   32347 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:21:46.398527   32347 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:21:46.398639   32347 host.go:66] Checking if "multinode-562818" exists ...
	I1212 20:21:46.398949   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.398994   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.414005   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I1212 20:21:46.414437   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.414893   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.414916   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.415222   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.415451   32347 main.go:141] libmachine: (multinode-562818) Calling .DriverName
	I1212 20:21:46.415634   32347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:21:46.415673   32347 main.go:141] libmachine: (multinode-562818) Calling .GetSSHHostname
	I1212 20:21:46.418436   32347 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:21:46.418848   32347 main.go:141] libmachine: (multinode-562818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:49:23", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:19:06 +0000 UTC Type:0 Mac:52:54:00:25:49:23 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-562818 Clientid:01:52:54:00:25:49:23}
	I1212 20:21:46.418883   32347 main.go:141] libmachine: (multinode-562818) DBG | domain multinode-562818 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:49:23 in network mk-multinode-562818
	I1212 20:21:46.419036   32347 main.go:141] libmachine: (multinode-562818) Calling .GetSSHPort
	I1212 20:21:46.419231   32347 main.go:141] libmachine: (multinode-562818) Calling .GetSSHKeyPath
	I1212 20:21:46.419415   32347 main.go:141] libmachine: (multinode-562818) Calling .GetSSHUsername
	I1212 20:21:46.419568   32347 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818/id_rsa Username:docker}
	I1212 20:21:46.511347   32347 ssh_runner.go:195] Run: systemctl --version
	I1212 20:21:46.517735   32347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:21:46.531391   32347 kubeconfig.go:92] found "multinode-562818" server: "https://192.168.39.77:8443"
	I1212 20:21:46.531419   32347 api_server.go:166] Checking apiserver status ...
	I1212 20:21:46.531456   32347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:21:46.545610   32347 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	I1212 20:21:46.554367   32347 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod193a44f373aa39bf67a4fef20e3c8d27/crio-81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557"
	I1212 20:21:46.554440   32347 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod193a44f373aa39bf67a4fef20e3c8d27/crio-81e44fb96ef3b62e0c0184f30e2b29964e064d1b0d5896cf6dfb964983b4a557/freezer.state
	I1212 20:21:46.564721   32347 api_server.go:204] freezer state: "THAWED"
	I1212 20:21:46.564754   32347 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1212 20:21:46.569712   32347 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I1212 20:21:46.569744   32347 status.go:421] multinode-562818 apiserver status = Running (err=<nil>)
	I1212 20:21:46.569753   32347 status.go:257] multinode-562818 status: &{Name:multinode-562818 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:21:46.569770   32347 status.go:255] checking status of multinode-562818-m02 ...
	I1212 20:21:46.570072   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.570111   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.585767   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I1212 20:21:46.586284   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.586752   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.586783   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.587082   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.587275   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .GetState
	I1212 20:21:46.588976   32347 status.go:330] multinode-562818-m02 host status = "Running" (err=<nil>)
	I1212 20:21:46.589004   32347 host.go:66] Checking if "multinode-562818-m02" exists ...
	I1212 20:21:46.589403   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.589478   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.604219   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1212 20:21:46.604630   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.605041   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.605055   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.605384   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.605550   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .GetIP
	I1212 20:21:46.608537   32347 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:21:46.608956   32347 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:21:46.608983   32347 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:21:46.609204   32347 host.go:66] Checking if "multinode-562818-m02" exists ...
	I1212 20:21:46.609581   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.609617   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.624305   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I1212 20:21:46.624725   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.625243   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.625266   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.625651   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.625870   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .DriverName
	I1212 20:21:46.626085   32347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:21:46.626109   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHHostname
	I1212 20:21:46.629107   32347 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:21:46.629585   32347 main.go:141] libmachine: (multinode-562818-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:1b:cb", ip: ""} in network mk-multinode-562818: {Iface:virbr1 ExpiryTime:2023-12-12 21:20:14 +0000 UTC Type:0 Mac:52:54:00:33:1b:cb Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-562818-m02 Clientid:01:52:54:00:33:1b:cb}
	I1212 20:21:46.629622   32347 main.go:141] libmachine: (multinode-562818-m02) DBG | domain multinode-562818-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:33:1b:cb in network mk-multinode-562818
	I1212 20:21:46.629765   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHPort
	I1212 20:21:46.629945   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHKeyPath
	I1212 20:21:46.630082   32347 main.go:141] libmachine: (multinode-562818-m02) Calling .GetSSHUsername
	I1212 20:21:46.630261   32347 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17734-9188/.minikube/machines/multinode-562818-m02/id_rsa Username:docker}
	I1212 20:21:46.722602   32347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:21:46.736625   32347 status.go:257] multinode-562818-m02 status: &{Name:multinode-562818-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:21:46.736681   32347 status.go:255] checking status of multinode-562818-m03 ...
	I1212 20:21:46.737109   32347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 20:21:46.737164   32347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 20:21:46.753026   32347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I1212 20:21:46.753421   32347 main.go:141] libmachine: () Calling .GetVersion
	I1212 20:21:46.753824   32347 main.go:141] libmachine: Using API Version  1
	I1212 20:21:46.753843   32347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 20:21:46.754124   32347 main.go:141] libmachine: () Calling .GetMachineName
	I1212 20:21:46.754283   32347 main.go:141] libmachine: (multinode-562818-m03) Calling .GetState
	I1212 20:21:46.755820   32347 status.go:330] multinode-562818-m03 host status = "Stopped" (err=<nil>)
	I1212 20:21:46.755836   32347 status.go:343] host is not running, skipping remaining checks
	I1212 20:21:46.755843   32347 status.go:257] multinode-562818-m03 status: &{Name:multinode-562818-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.02s)
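The --alsologtostderr trace above walks through how the status check decides the control-plane apiserver is healthy: it reads the server address from the kubeconfig found on the node (https://192.168.39.77:8443), locates the kube-apiserver process and its freezer cgroup to confirm the state is THAWED, then calls /healthz and accepts a 200 "ok" response. The sketch below covers only that final probe; the trace does not show how TLS is configured, so certificate verification is simply skipped here, which the real check built from the cluster's kubeconfig would not necessarily do.

// Sketch of the final health probe from the status trace above: GET /healthz
// and treat a 200 response with body "ok" as healthy. TLS verification is
// skipped only to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func healthz(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// Address taken from the trace above; substitute the control-plane IP of the cluster being checked.
	ok, err := healthz("https://192.168.39.77:8443/healthz")
	fmt.Println(ok, err)
}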

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 node start m03 --alsologtostderr
E1212 20:21:48.881099   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-562818 node start m03 --alsologtostderr: (28.548404839s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.21s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-562818 node delete m03: (1.027217591s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.59s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-562818 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 20:36:48.881423   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:38:56.433524   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 20:39:39.384409   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 20:41:48.880950   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 20:42:42.520636   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-562818 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.595058127s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-562818 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.14s)

TestMultiNode/serial/ValidateNameConflict (48.84s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-562818
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-562818-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-562818-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.084539ms)

                                                
                                                
-- stdout --
	* [multinode-562818-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-562818-m02' is duplicated with machine name 'multinode-562818-m02' in profile 'multinode-562818'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
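The MK_USAGE failure above is the expected outcome: multinode-562818-m02 is already the machine name of the second node inside the multinode-562818 profile, so it cannot be reused as a standalone profile name. A minimal sketch of a start that avoids the collision (the profile name below is purely illustrative):

    # any name not already used by a profile or machine works
    out/minikube-linux-amd64 start -p multinode-562818-extra --driver=kvm2 --container-runtime=crio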
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-562818-m03 --driver=kvm2  --container-runtime=crio
E1212 20:43:56.433606   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-562818-m03 --driver=kvm2  --container-runtime=crio: (47.480852522s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-562818
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-562818: exit status 80 (236.893653ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-562818
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-562818-m03 already exists in multinode-562818-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
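Here the GUEST_NODE_ADD failure is again intentional: the standalone profile created a moment earlier already claims the machine name multinode-562818-m03, so it cannot be added as a node of the multinode cluster. A minimal sketch of the recovery path, assuming (as the test's cleanup suggests) that removing the standalone profile frees the name for node add:

    out/minikube-linux-amd64 delete -p multinode-562818-m03   # drop the conflicting standalone profile
    out/minikube-linux-amd64 node add -p multinode-562818     # should then be able to create m03 again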
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-562818-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.84s)

TestScheduledStopUnix (117.62s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-072486 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-072486 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.849498841s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072486 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-072486 -n scheduled-stop-072486
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072486 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072486 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-072486 -n scheduled-stop-072486
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-072486
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072486 --schedule 15s
E1212 20:48:56.433042   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-072486
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-072486: exit status 7 (84.502523ms)

                                                
                                                
-- stdout --
	scheduled-stop-072486
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-072486 -n scheduled-stop-072486
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-072486 -n scheduled-stop-072486: exit status 7 (79.327018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
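Exit status 7 here confirms the scheduled stop actually fired: the host is now reported as Stopped. The flow the test drives, sketched with the same flags that appear in the log above (the profile name is the one from this run):

    out/minikube-linux-amd64 stop -p scheduled-stop-072486 --schedule 5m        # arm a stop 5 minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-072486 --cancel-scheduled   # disarm it again
    out/minikube-linux-amd64 stop -p scheduled-stop-072486 --schedule 15s       # re-arm with a short delay
    out/minikube-linux-amd64 status -p scheduled-stop-072486                    # exit status 7 once the host is Stopped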
helpers_test.go:175: Cleaning up "scheduled-stop-072486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-072486
--- PASS: TestScheduledStopUnix (117.62s)

TestKubernetesUpgrade (194.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.269015094s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-334379
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-334379: (3.138538905s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-334379 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-334379 status --format={{.Host}}: exit status 7 (101.703431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.531976256s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-334379 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (101.127826ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-334379] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-334379
	    minikube start -p kubernetes-upgrade-334379 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3343792 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-334379 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-334379 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.19740467s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-334379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-334379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-334379: (1.218423885s)
--- PASS: TestKubernetesUpgrade (194.62s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (105.131327ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-314635] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
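As the message states, --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the two valid invocations, using only flags that appear elsewhere in this run:

    # either start without any version at all:
    out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # or, if a version is pinned in the global config, clear it first:
    out/minikube-linux-amd64 config unset kubernetes-version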
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (107.55s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-314635 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-314635 --driver=kvm2  --container-runtime=crio: (1m47.262444931s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-314635 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (107.55s)

TestNoKubernetes/serial/StartWithStopK8s (27.19s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.798043746s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-314635 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-314635 status -o json: exit status 2 (295.058972ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-314635","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-314635
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-314635: (1.09286159s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.19s)

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestNoKubernetes/serial/Start (28.24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 20:51:48.881036   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-314635 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.241493074s)
--- PASS: TestNoKubernetes/serial/Start (28.24s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-314635 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-314635 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.383185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
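The non-zero exit is the pass condition here: with Kubernetes disabled, systemctl reports the kubelet unit as inactive, and the ssh "Process exited with status 3" seen above is the standard systemctl is-active code for an inactive unit. A minimal sketch of the same check:

    # exits 0 only while the kubelet unit is active; status 3 is what the test expects after --no-kubernetes
    out/minikube-linux-amd64 ssh -p NoKubernetes-314635 "sudo systemctl is-active --quiet service kubelet"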
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (1.87s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-314635
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-314635: (1.872903221s)
--- PASS: TestNoKubernetes/serial/Stop (1.87s)

TestNoKubernetes/serial/StartNoArgs (22.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-314635 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-314635 --driver=kvm2  --container-runtime=crio: (22.427959984s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-314635 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-314635 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.022953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/false (4.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-690675 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-690675 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (494.330288ms)

                                                
                                                
-- stdout --
	* [false-690675] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:52:44.699902   43021 out.go:296] Setting OutFile to fd 1 ...
	I1212 20:52:44.700097   43021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:52:44.700110   43021 out.go:309] Setting ErrFile to fd 2...
	I1212 20:52:44.700127   43021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 20:52:44.700337   43021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17734-9188/.minikube/bin
	I1212 20:52:44.700985   43021 out.go:303] Setting JSON to false
	I1212 20:52:44.701936   43021 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5719,"bootTime":1702408646,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:52:44.702000   43021 start.go:138] virtualization: kvm guest
	I1212 20:52:44.704506   43021 out.go:177] * [false-690675] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 20:52:44.706295   43021 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 20:52:44.706337   43021 notify.go:220] Checking for updates...
	I1212 20:52:44.707942   43021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:52:44.709296   43021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17734-9188/kubeconfig
	I1212 20:52:44.710960   43021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17734-9188/.minikube
	I1212 20:52:44.712414   43021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:52:44.713915   43021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:52:44.715765   43021 config.go:182] Loaded profile config "force-systemd-flag-675766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 20:52:44.715858   43021 config.go:182] Loaded profile config "kubernetes-upgrade-334379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 20:52:44.715912   43021 config.go:182] Loaded profile config "stopped-upgrade-709141": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 20:52:44.715988   43021 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 20:52:45.119861   43021 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 20:52:45.121454   43021 start.go:298] selected driver: kvm2
	I1212 20:52:45.121476   43021 start.go:902] validating driver "kvm2" against <nil>
	I1212 20:52:45.121492   43021 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:52:45.123994   43021 out.go:177] 
	W1212 20:52:45.125498   43021 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 20:52:45.126942   43021 out.go:177] 

                                                
                                                
** /stderr **
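This MK_USAGE failure is the point of the "false" group: --cni=false is rejected because the crio runtime requires some CNI to be configured. A minimal sketch of starts that do satisfy the requirement, using CNI flags exercised later in this run (the profile name is purely illustrative):

    # a built-in CNI by name, as in the kindnet group:
    out/minikube-linux-amd64 start -p example-crio --cni=kindnet --driver=kvm2 --container-runtime=crio
    # or a custom manifest, as in the custom-flannel group:
    out/minikube-linux-amd64 start -p example-crio --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio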
net_test.go:88: 
----------------------- debugLogs start: false-690675 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-690675" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-690675

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690675"

                                                
                                                
----------------------- debugLogs end: false-690675 [took: 3.477835375s] --------------------------------
helpers_test.go:175: Cleaning up "false-690675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-690675
--- PASS: TestNetworkPlugins/group/false (4.14s)

TestPause/serial/Start (103.75s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-062428 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-062428 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m43.747748986s)
--- PASS: TestPause/serial/Start (103.75s)

TestPause/serial/SecondStartNoReconfiguration (55.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-062428 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1212 20:54:39.384656   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-062428 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.505723808s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (55.54s)

TestNetworkPlugins/group/auto/Start (117.66s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m57.659005295s)
--- PASS: TestNetworkPlugins/group/auto/Start (117.66s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-062428 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-062428 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-062428 --output=json --layout=cluster: exit status 2 (305.015924ms)

                                                
                                                
-- stdout --
	{"Name":"pause-062428","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-062428","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
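The cluster layout above encodes component state as HTTP-style status codes, as shown in the JSON itself: 200 for OK (kubeconfig), 405 for Stopped (kubelet), and 418 for Paused (apiserver and the cluster as a whole), which is why the whole command exits non-zero while paused. A small sketch for pulling just the top-level state, assuming jq is available on the host running the tests:

    # prints "Paused" while the cluster is paused; minikube status itself exits 2 here, as logged above
    out/minikube-linux-amd64 status -p pause-062428 --output=json --layout=cluster | jq -r '.StatusName'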
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (1.18s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-062428 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-062428 --alsologtostderr -v=5: (1.179454976s)
--- PASS: TestPause/serial/Unpause (1.18s)

TestPause/serial/PauseAgain (1.45s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-062428 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-062428 --alsologtostderr -v=5: (1.447091772s)
--- PASS: TestPause/serial/PauseAgain (1.45s)

TestPause/serial/DeletePaused (1.26s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-062428 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-062428 --alsologtostderr -v=5: (1.2642489s)
--- PASS: TestPause/serial/DeletePaused (1.26s)

TestPause/serial/VerifyDeletedResources (3.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.429879416s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.43s)

TestNetworkPlugins/group/kindnet/Start (101.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m41.713595351s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.71s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-709141
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

TestNetworkPlugins/group/calico/Start (119.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m59.380955043s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.38s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s5pps" [5f32328d-4930-464a-9fba-605af1c7537d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 20:56:48.880537   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s5pps" [5f32328d-4930-464a-9fba-605af1c7537d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.01211206s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.38s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (87.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.481388764s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.48s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-c7g8z" [51398039-fa15-4250-b5ae-243336a3fb7b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.033228457s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nh675" [1fcbe4ce-01e5-426e-8dcb-4ccf125c6ecf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nh675" [1fcbe4ce-01e5-426e-8dcb-4ccf125c6ecf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.017356275s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (102.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m42.987644998s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.99s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7xwdq" [071d0387-7b6b-486d-8ca8-9ac401ff9dc9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.038992179s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dzws6" [a9bb986a-452e-4674-88c2-b65a8c09146e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dzws6" [a9bb986a-452e-4674-88c2-b65a8c09146e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.015242237s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.40s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mz5wl" [ed48ed2e-ef36-491c-9797-428298a40951] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mz5wl" [ed48ed2e-ef36-491c-9797-428298a40951] Running
E1212 20:58:56.433067   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.017252802s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.47s)

TestNetworkPlugins/group/flannel/Start (90.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m30.32184792s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (110.92s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1212 20:59:22.521424   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-690675 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m50.924332002s)
--- PASS: TestNetworkPlugins/group/bridge/Start (110.92s)

TestStartStop/group/old-k8s-version/serial/FirstStart (162.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-372099 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1212 20:59:39.385288   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-372099 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m42.836036462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.84s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mtc5d" [8a9c6247-ceb3-4510-9c70-19706d536316] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mtc5d" [8a9c6247-ceb3-4510-9c70-19706d536316] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.01119534s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestStartStop/group/no-preload/serial/FirstStart (91.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-343495 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-343495 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m31.037801033s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.04s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lkmlc" [0ab88b00-017b-4926-9e1e-9273153d94c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.030813282s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (13.58s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2bdqr" [db65ee64-fd38-4130-b02e-e451d440a7e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2bdqr" [db65ee64-fd38-4130-b02e-e451d440a7e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.020301315s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.58s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (66.54s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-831188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-831188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m6.538029061s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.54s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-690675 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-690675 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hbrlp" [c106a0c9-1595-47ab-9f0a-7cb297946ff9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hbrlp" [c106a0c9-1595-47ab-9f0a-7cb297946ff9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.025921324s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-690675 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-690675 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E1212 21:30:20.139085   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-171828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-171828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m42.40491186s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.41s)

TestStartStop/group/no-preload/serial/DeployApp (9.04s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-343495 create -f testdata/busybox.yaml
E1212 21:01:45.697557   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:45.702824   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:45.713078   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:45.733375   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:45.773664   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [afbe0f5e-7e3a-4e85-9323-89a7c75248a5] Pending
E1212 21:01:45.853967   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:46.014903   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:46.335528   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
helpers_test.go:344: "busybox" [afbe0f5e-7e3a-4e85-9323-89a7c75248a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 21:01:46.976628   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:01:48.257852   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
helpers_test.go:344: "busybox" [afbe0f5e-7e3a-4e85-9323-89a7c75248a5] Running
E1212 21:01:48.880731   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:01:50.818061   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.032137631s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-343495 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.04s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-343495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-343495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005604706s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-343495 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-831188 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3f151c8-69ac-4783-b525-035f3955a799] Pending
helpers_test.go:344: "busybox" [c3f151c8-69ac-4783-b525-035f3955a799] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 21:02:06.179211   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c3f151c8-69ac-4783-b525-035f3955a799] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.030958826s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-831188 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.81s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-831188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-831188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.708944553s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-831188 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.81s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-372099 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0241166c-9425-4d66-8850-8aab7f9cb630] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0241166c-9425-4d66-8850-8aab7f9cb630] Running
E1212 21:02:22.810369   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:22.815693   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:22.826049   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:22.846992   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:22.887500   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:22.968482   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:23.128931   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:02:23.449241   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.040705764s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-372099 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-372099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 21:02:24.090087   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-372099 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2951bd10-8d18-4fbf-a012-312a24ff975d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 21:03:22.599022   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2951bd10-8d18-4fbf-a012-312a24ff975d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.043289115s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-171828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-171828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093464444s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-171828 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/SecondStart (704.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-343495 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1212 21:04:29.540451   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:04:34.281138   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-343495 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m44.675228723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-343495 -n no-preload-343495
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (704.95s)

TestStartStop/group/embed-certs/serial/SecondStart (570.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-831188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-831188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m29.99755043s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831188 -n embed-certs-831188
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (570.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (706.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-372099 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1212 21:05:03.355870   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:05:06.654491   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:05:07.724783   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:05:20.138842   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.144140   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.154442   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.174737   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.215011   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.295356   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.455789   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:20.776396   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:21.416740   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:22.697477   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:23.837049   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:05:25.258303   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:05:30.379356   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-372099 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m45.991524877s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-372099 -n old-k8s-version-372099
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (706.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (545.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-171828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 21:06:04.798236   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:06:06.483937   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:06.489242   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:06.499483   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:06.519752   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:06.560052   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:06.640427   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:06.800924   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:07.121527   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:07.762494   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:09.042708   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:11.604334   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:16.725312   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:26.965957   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:29.645359   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:06:31.930306   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:06:42.063383   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:06:45.697344   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:06:47.447030   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:06:48.880760   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:07:13.380959   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:07:22.809904   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:07:26.719227   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:07:28.407336   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:07:50.495039   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:08:03.984567   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:08:12.359408   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:08:40.041590   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:08:45.800873   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:08:50.327913   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:08:56.432700   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
E1212 21:09:13.485562   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:09:39.384615   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:09:42.875349   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:10:10.560321   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
E1212 21:10:20.138660   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:10:47.825814   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/flannel-690675/client.crt: no such file or directory
E1212 21:11:06.483268   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:11:34.168734   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/bridge-690675/client.crt: no such file or directory
E1212 21:11:45.697545   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/auto-690675/client.crt: no such file or directory
E1212 21:11:48.881596   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/ingress-addon-legacy-435457/client.crt: no such file or directory
E1212 21:12:22.810728   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kindnet-690675/client.crt: no such file or directory
E1212 21:13:12.359057   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/calico-690675/client.crt: no such file or directory
E1212 21:13:45.801196   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/custom-flannel-690675/client.crt: no such file or directory
E1212 21:13:56.433363   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/functional-686513/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-171828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m5.424602463s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171828 -n default-k8s-diff-port-171828
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (545.71s)
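Note: the cert_rotation errors logged during the run above all reference client.crt files under profiles that earlier tests appear to have already deleted; the SecondStart test still passes despite them. A minimal shell sketch, assuming access to the same Jenkins workspace and using profile names copied from the log lines above, to list which of those certs are actually absent:

    for p in custom-flannel-690675 bridge-690675 functional-686513 addons-459174; do
      crt=/home/jenkins/minikube-integration/17734-9188/.minikube/profiles/$p/client.crt
      # print only the certs that are gone, matching the cert_rotation errors above
      [ -f "$crt" ] || echo "missing: $crt"
    done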

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (62.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-422706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1212 21:29:39.385124   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/addons-459174/client.crt: no such file or directory
E1212 21:29:42.874917   16456 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/enable-default-cni-690675/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-422706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m2.020106909s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-422706 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-422706 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-422706 --alsologtostderr -v=3: (3.115656537s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-422706 -n newest-cni-422706
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-422706 -n newest-cni-422706: exit status 7 (75.053144ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-422706 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
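For reference, a condensed shell sketch of the same flow outside the Go test harness: check the host state (the test treats the non-zero exit status 7 reported above as "may be ok" for a stopped host), then enable the dashboard addon while the cluster is stopped. The profile name and flags are copied from the log lines above; treating only exit codes 0 and 7 as acceptable is an assumption made for this sketch.

    PROFILE=newest-cni-422706
    out/minikube-linux-amd64 status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE"
    rc=$?
    # exit status 7 is what the stopped host returned above and is tolerated by the test
    if [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ]; then
      out/minikube-linux-amd64 addons enable dashboard -p "$PROFILE" \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi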

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (49.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-422706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-422706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (49.596217899s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-422706 -n newest-cni-422706
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.88s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-422706 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-422706 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-422706 -n newest-cni-422706
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-422706 -n newest-cni-422706: exit status 2 (259.484039ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-422706 -n newest-cni-422706
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-422706 -n newest-cni-422706: exit status 2 (248.373523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-422706 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-422706 -n newest-cni-422706
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-422706 -n newest-cni-422706
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)
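The Pause subtest above pauses the cluster, checks that the apiserver reports Paused and the kubelet reports Stopped (exit status 2 from status is tolerated), then unpauses and re-checks. A condensed sketch of that sequence, using the same binary, profile, and flags as in the log; the trailing comments record only what the log above shows:

    PROFILE=newest-cni-422706
    out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"   # logged "Paused", exit status 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE"     # logged "Stopped", exit status 2
    out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"   # re-check; output not shown in the log
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE"     # re-check; output not shown in the log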

                                                
                                    

Test skip (39/305)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
252 TestNetworkPlugins/group/kubenet 4.04
260 TestNetworkPlugins/group/cilium 4.38
268 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-690675 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-690675" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17734-9188/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 20:52:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.208:8443
  name: kubernetes-upgrade-334379
contexts:
- context:
    cluster: kubernetes-upgrade-334379
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 20:52:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-334379
  name: kubernetes-upgrade-334379
current-context: kubernetes-upgrade-334379
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-334379
  user:
    client-certificate: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kubernetes-upgrade-334379/client.crt
    client-key: /home/jenkins/minikube-integration/17734-9188/.minikube/profiles/kubernetes-upgrade-334379/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-690675

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690675"

                                                
                                                
----------------------- debugLogs end: kubenet-690675 [took: 3.862179198s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-690675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-690675
--- SKIP: TestNetworkPlugins/group/kubenet (4.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-690675 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-690675

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-690675" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: iptables-save:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: iptables table nat:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-690675

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-690675

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-690675" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-690675" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-690675

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-690675

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-690675" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-690675" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-690675" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-690675" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-690675" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: kubelet daemon config:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> k8s: kubelet logs:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-690675

>>> host: docker daemon status:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: docker daemon config:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: docker system info:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: cri-docker daemon status:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: cri-docker daemon config:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: cri-dockerd version:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: containerd daemon status:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: containerd daemon config:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: containerd config dump:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: crio daemon status:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: crio daemon config:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: /etc/crio:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

>>> host: crio config:
* Profile "cilium-690675" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690675"

----------------------- debugLogs end: cilium-690675 [took: 4.202983339s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-690675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-690675
--- SKIP: TestNetworkPlugins/group/cilium (4.38s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-741087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-741087
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
